Data Collection | Definition, Methods & Examples

Published on June 5, 2020 by Pritha Bhandari. Revised on June 21, 2023.

Data collection is a systematic process of gathering observations or measurements. Whether you are performing research for business, governmental or academic purposes, data collection allows you to gain first-hand knowledge and original insights into your research problem.

While methods and aims may differ between fields, the overall process of data collection remains largely the same. Before you begin collecting data, you need to consider:

  • The aim of the research
  • The type of data that you will collect
  • The methods and procedures you will use to collect, store, and process the data

To collect high-quality data that is relevant to your purposes, follow these four steps.

Table of contents

  • Step 1: Define the aim of your research
  • Step 2: Choose your data collection method
  • Step 3: Plan your data collection procedures
  • Step 4: Collect the data
  • Other interesting articles
  • Frequently asked questions about data collection

Step 1: Define the aim of your research

Before you start the process of data collection, you need to identify exactly what you want to achieve. You can start by writing a problem statement: what is the practical or scientific issue that you want to address and why does it matter?

Next, formulate one or more research questions that precisely define what you want to find out. Depending on your research questions, you might need to collect quantitative or qualitative data:

  • Quantitative data is expressed in numbers and graphs and is analyzed through statistical methods.
  • Qualitative data is expressed in words and analyzed through interpretations and categorizations.

If your aim is to test a hypothesis, measure something precisely, or gain large-scale statistical insights, collect quantitative data. If your aim is to explore ideas, understand experiences, or gain detailed insights into a specific context, collect qualitative data. If you have several aims, you can use a mixed methods approach that collects both types of data.

For example, suppose you are studying employees' perceptions of their managers across a company:

  • Your first aim is to assess whether there are significant differences in perceptions of managers across different departments and office locations.
  • Your second aim is to gather meaningful feedback from employees to explore new ideas for how managers can improve.

Step 2: Choose your data collection method

Based on the data you want to collect, decide which method is best suited for your research.

  • Experimental research is primarily a quantitative method.
  • Interviews, focus groups, and ethnographies are qualitative methods.
  • Surveys, observations, archival research, and secondary data collection can be quantitative or qualitative methods.

Carefully consider what method you will use to gather data that helps you directly answer your research questions.

Data collection methods

| Method | When to use | How to collect data |
|---|---|---|
| Experiment | To test a causal relationship. | Manipulate variables and measure their effects on others. |
| Survey | To understand the general characteristics or opinions of a group of people. | Distribute a list of questions to a sample online, in person, or over the phone. |
| Interview/focus group | To gain an in-depth understanding of perceptions or opinions on a topic. | Verbally ask participants open-ended questions in individual interviews or focus group discussions. |
| Observation | To understand something in its natural setting. | Measure or survey a sample without trying to affect them. |
| Ethnography | To study the culture of a community or organization first-hand. | Join and participate in a community and record your observations and reflections. |
| Archival research | To understand current or historical events, conditions, or practices. | Access manuscripts, documents, or records from libraries, depositories, or the internet. |
| Secondary data collection | To analyze data from populations that you can’t access first-hand. | Find existing datasets that have already been collected, from sources such as government agencies or research organizations. |

Step 3: Plan your data collection procedures

When you know which method(s) you are using, you need to plan exactly how you will implement them. What procedures will you follow to make accurate observations or measurements of the variables you are interested in?

For instance, if you’re conducting surveys or interviews, decide what form the questions will take; if you’re conducting an experiment, make decisions about your experimental design (e.g., determine inclusion and exclusion criteria).

Operationalization

Sometimes your variables can be measured directly: for example, you can collect data on the average age of employees simply by asking for dates of birth. However, often you’ll be interested in collecting data on more abstract concepts or variables that can’t be directly observed.

Operationalization means turning abstract conceptual ideas into measurable observations. When planning how you will collect data, you need to translate the conceptual definition of what you want to study into the operational definition of what you will actually measure.

For example, to operationalize the concept of leadership quality:

  • You ask managers to rate their own leadership skills on 5-point scales assessing the ability to delegate, decisiveness, and dependability.
  • You ask their direct employees to provide anonymous feedback on the managers regarding the same topics.

You may need to develop a sampling plan to obtain data systematically. This involves defining a population (the group you want to draw conclusions about) and a sample (the group you will actually collect data from).

Your sampling method will determine how you recruit participants or obtain measurements for your study. To decide on a sampling method you will need to consider factors like the required sample size, accessibility of the sample, and timeframe of the data collection.
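To make the sampling step concrete, here is a minimal sketch in Python of drawing a simple random sample from a sampling frame. The file name, columns, and sample size are hypothetical placeholders, not part of the original article.

```python
# Minimal sketch: draw a simple random sample from a sampling frame.
# Assumes a hypothetical employees.csv with one row per member of the population.
import csv
import random

with open("employees.csv", newline="") as f:
    population = list(csv.DictReader(f))

random.seed(42)  # fixed seed so the draw can be documented and reproduced
sample_size = min(100, len(population))
sample = random.sample(population, k=sample_size)

print(f"Sampled {sample_size} of {len(population)} employees")
```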

Standardizing procedures

If multiple researchers are involved, write a detailed manual to standardize data collection procedures in your study.

This means laying out specific step-by-step instructions so that everyone in your research team collects data in a consistent way – for example, by conducting experiments under the same conditions and using objective criteria to record and categorize observations. This helps you avoid common research biases like omitted variable bias or information bias.

This helps ensure the reliability of your data, and you can also use it to replicate the study in the future.

Creating a data management plan

Before beginning data collection, you should also decide how you will organize and store your data.

  • If you are collecting data from people, you will likely need to anonymize and safeguard the data to prevent leaks of sensitive information (e.g., names or identity numbers); a minimal sketch of one approach follows this list.
  • If you are collecting data via interviews or pencil-and-paper formats, you will need to perform transcriptions or data entry in systematic ways to minimize distortion.
  • You can prevent loss of data by having an organization system that is routinely backed up.
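As a rough illustration of the anonymization point above, the following Python sketch replaces direct identifiers with salted hashes before storage. The field names and salt are hypothetical; a real project would follow its own data protection requirements.

```python
# Minimal sketch: replace direct identifiers with stable pseudonyms before storage.
import hashlib

SALT = "replace-with-a-project-secret"  # hypothetical; keep it out of the shared dataset


def pseudonymize(identifier: str) -> str:
    """Return a stable, non-reversible code for a direct identifier."""
    return hashlib.sha256((SALT + identifier).encode()).hexdigest()[:12]


responses = [
    {"employee_id": "E1042", "rating": 4},  # made-up example records
    {"employee_id": "E2077", "rating": 3},
]

for row in responses:
    row["employee_id"] = pseudonymize(row["employee_id"])

print(responses)  # identifiers are now pseudonyms; store the salt separately
```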

Step 4: Collect the data

Finally, you can implement your chosen methods to measure or observe the variables you are interested in.

For example, in the employee survey described above, the closed-ended questions ask participants to rate their manager’s leadership skills on scales from 1–5. The data produced is numerical and can be statistically analyzed for averages and patterns.
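As a minimal illustration of how such closed-ended ratings might be summarized, the sketch below (standard-library Python) computes a mean rating per manager; the names and scores are made-up illustration data.

```python
# Minimal sketch: summarize closed-ended 1-5 ratings per manager.
from statistics import mean

ratings = {
    "Manager A": [4, 5, 3, 4, 5],  # made-up illustration data
    "Manager B": [2, 3, 3, 2, 4],
}

for manager, scores in ratings.items():
    print(f"{manager}: mean rating {mean(scores):.2f} from {len(scores)} responses")
```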

To ensure that high quality data is recorded in a systematic way, here are some best practices:

  • Record all relevant information as and when you obtain data. For example, note down whether or how lab equipment is recalibrated during an experimental study.
  • Double-check manual data entry for errors.
  • If you collect quantitative data, you can assess the reliability and validity to get an indication of your data quality; one common reliability check is sketched below.
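One common way to check the internal consistency (reliability) of a set of related survey items is Cronbach's alpha. The sketch below, in plain Python with made-up 1-5 ratings, shows the calculation; the data and the usual 0.7 rule of thumb are illustrative assumptions, not taken from the article.

```python
# Minimal sketch: Cronbach's alpha for three related 1-5 survey items.
from statistics import pvariance

# Each row is one respondent's ratings on three related items
# (e.g., delegation, decisiveness, dependability); values are made up.
ratings = [
    [4, 5, 4],
    [3, 3, 4],
    [5, 4, 5],
    [2, 3, 2],
    [4, 4, 3],
]

k = len(ratings[0])  # number of items
item_variances = [pvariance(item_scores) for item_scores in zip(*ratings)]
total_variance = pvariance([sum(row) for row in ratings])

alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")  # values around 0.7 or higher are often treated as acceptable
```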


Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Likert scale

Research bias

  • Implicit bias
  • Framing effect
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic

Frequently asked questions about data collection

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g. understanding the needs of your consumers or user testing your website)
  • You can control and standardize the process for high reliability and validity (e.g., choosing appropriate measurements and sampling methods)

However, there are also some drawbacks: data collection can be time-consuming, labor-intensive and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data, it’s important to consider how you will operationalize the variables that you want to measure.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .


Step-by-Step Guide: Data Gathering in Research Projects

  • by Willie Wilson
  • October 22, 2023

Welcome to our ultimate guide on data gathering in research projects! Whether you’re an aspiring researcher or a seasoned professional, this blog post will equip you with the essential steps to effectively gather data. In this ever-evolving digital age, data has become the cornerstone of decision-making and problem-solving in various fields. So, understanding the process of data gathering is crucial to ensure accurate and reliable results.

In this article, we will delve into the ten key steps involved in data gathering. From formulating research questions to selecting the right data collection methods , we’ll cover everything you need to know to conduct a successful research project. So, grab your notebook and get ready to embark on an exciting journey of data exploration!

Let’s dive right in and discover the step-by-step process of data gathering, enabling you to enhance your research skills and deliver impactful results.

10 Steps to Master Data Gathering

Data gathering is a crucial step in any research or analysis process. It provides the foundation for informed decision-making , insightful analysis, and meaningful insights. Whether you’re a data scientist, a market researcher, or just someone curious about a specific topic, understanding the steps involved in data gathering is essential. So, let’s dive into the 10 steps you need to master to become a data gathering wizard!

Step 1: Define Your Objective

First things first, clearly define your objective. Ask yourself what you’re trying to achieve with the data you gather. Are you looking for trends, patterns, or correlations? Do you want to support a hypothesis or disprove it? Having a clear goal in mind will help you stay focused and ensure that your data gathering efforts are purposeful.

Step 2: Determine Your Data Sources

Once you know what you’re after, it’s time to identify your data sources. Will you be collecting primary data through surveys, interviews, or experiments? Or will you rely on secondary sources like databases, research papers, or official reports? Consider the pros and cons of each source and choose the ones that align best with your objective.

Step 3: Create a Data Collection Plan

Planning is key! Before you start gathering data, create a detailed data collection plan. Outline the key variables you want to measure, determine the sampling technique, and devise a timeline. This plan will serve as your roadmap throughout the data gathering process and ensure that you don’t miss any important steps or variables.

Step 4: Design Your Data Collection Tools

Now that your plan is in place, it’s time to design the tools you’ll use to collect the data. This could be a survey questionnaire, an interview script, or an observation checklist. Remember to keep your questions clear, concise, and unbiased to ensure high-quality data.

Step 5: Pretest Your Tools

Before you launch into full-scale data collection, it’s wise to pretest your tools. This involves trying out your survey questionnaire, interview script, or observation checklist on a small sample of respondents. This step allows you to identify any issues or ambiguities in your tools and make necessary revisions.

Step 6: Collect Your Data

Now comes the exciting part—collecting the actual data! Deploy your data collection tools on your chosen sample and gather the information you need. Be organized, diligent, and ethical in your data collection, ensuring that you respect respondents’ privacy and confidentiality.

Step 7: Clean and Validate Your Data

Raw data can be messy. Before you start analyzing it, you need to clean and validate it. Remove any duplicate entries, correct any errors or inconsistencies, and check for data integrity. This step is critical to ensure the accuracy and reliability of your findings.
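A minimal sketch of such a cleaning pass, assuming pandas and a hypothetical responses.csv with respondent_id and rating columns (neither the file nor the columns come from the original post), might look like this:

```python
# Minimal sketch: clean and validate raw survey responses before analysis.
import pandas as pd

df = pd.read_csv("responses.csv")  # hypothetical raw export

df = df.drop_duplicates(subset="respondent_id")  # remove duplicate entries
df = df.dropna(subset=["rating"])                # drop rows missing the key measure
df = df[df["rating"].between(1, 5)]              # keep only ratings in the valid range

df.to_csv("responses_clean.csv", index=False)
print(f"{len(df)} valid responses retained")
```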

Step 8: Analyze Your Data

With clean and validated data in hand, it’s time to analyze! Use statistical techniques, visualization tools, or any other relevant methods to uncover patterns, relationships, and insights within your data. This step is where the true magic happens, so put on your analytical hat and dig deep!

Step 9: Interpret Your Findings

Analyzing data is just the first step; interpreting the findings is where the real value lies. Look for meaningful patterns, draw connections, and uncover insights that align with your objective. Remember to consider the limitations of your data and acknowledge any potential biases.

Step 10: Communicate Your Results

Last but not least, share your findings with the world! Prepare visualizations, reports, or presentations that effectively communicate your results. Make sure your audience understands the key takeaways and implications of your findings. Remember, knowledge is power, but only if it’s effectively shared.

And voila! You’ve now familiarized yourself with the 10 steps to master data gathering. Whether you’re a data enthusiast or a professional in the field, following these steps will set you on the path to success. So go forth, embrace the data, and uncover the hidden treasures within!

FAQ: What are the 10 Steps in Data Gathering

In the world of data-driven decision-making, gathering accurate and reliable data is crucial. Whether you’re conducting market research, academic studies, or simply exploring a topic of interest, the process of data gathering involves various steps. In this FAQ-style guide, we’ll explore the 10 steps of data gathering that will help you collect and analyze data effectively.

What are the Steps in Data Gathering

1. Identify your research objective: Before diving into data gathering, it’s essential to define the purpose of your research. Determine what information you need to collect and how it will contribute to your overall goal.

2. Create a research plan: Develop a detailed plan outlining the methods and strategies you’ll use to gather data. Consider factors such as time constraints, available resources, and potential obstacles.

3. Choose your data collection method: There are various methods to collect data, including surveys, interviews, observations, and experiments. Select a method or combination of methods that align with your research objective and provide the most accurate and relevant data.

4. Design your data collection tool: Once you’ve chosen your data collection method, design the tools you’ll use to gather information. This may include developing survey questionnaires, interview guides, or observation protocols.

5. Collect your data: Now it’s time to put your plan into action and start gathering data. Ensure proper training for data collectors, maintain accurate records, and adhere to ethical guidelines if applicable.

6. Clean and organize your data: After collecting the data, it’s essential to clean and organize it to ensure accuracy and ease of analysis. Remove any inconsistencies, irrelevant information, or duplicate entries. Use software tools such as spreadsheets or statistical software to manage your data effectively.

7. Analyze your data: With the cleaned and organized data, begin analyzing it to uncover patterns, trends, and insights. Utilize statistical techniques and visualizations to make sense of your data and draw meaningful conclusions.

8. Interpret your findings: Once you’ve analyzed the data, interpret the results in the context of your research objective. Look for connections, relationships, and implications that can inform your decision-making process.

9. Draw conclusions and make recommendations: Based on your analysis and interpretation, draw conclusions about your research question and provide recommendations for further action or future studies.

10. Communicate your findings: Finally, present your findings in a clear and concise manner. This could be through a research report, presentation, or infographic. Consider the appropriate format for your audience and ensure your communication is engaging and accessible (a minimal charting sketch follows this list).
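As a rough sketch of the charting idea mentioned in the last step, the snippet below uses matplotlib with made-up average ratings per department; the numbers and labels are placeholders.

```python
# Minimal sketch: communicate results with a simple bar chart.
import matplotlib.pyplot as plt

departments = ["Sales", "Engineering", "Support"]
avg_rating = [3.8, 4.2, 3.5]  # hypothetical mean ratings on a 1-5 scale

plt.bar(departments, avg_rating)
plt.ylim(0, 5)
plt.ylabel("Mean rating (1-5)")
plt.title("Average rating by department")
plt.savefig("ratings_by_department.png", dpi=150)  # include the image in a report or slides
```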

Data gathering may seem like a daunting process, but by following these 10 steps, you can navigate it successfully. Remember to stay focused on your research objective, choose the right methods and tools, and analyze your data thoroughly. With proper planning and execution, you’ll gather valuable insights that can inform decision-making and drive meaningful outcomes.


Data collection in research: Your complete guide

Last updated 31 January 2023. Reviewed by Cathy Heath.


In the late 16th century, Francis Bacon coined the phrase "knowledge is power," which implies that knowledge is a powerful force, like physical strength. In the 21st century, knowledge in the form of data is unquestionably powerful.

But data isn't something you just have - you need to collect it. This means utilizing a data collection process and turning the collected data into knowledge that you can leverage into a successful strategy for your business or organization.

Believe it or not, there's more to data collection than just conducting a Google search. In this complete guide, we shine a spotlight on data collection, outlining what it is, types of data collection methods, common challenges in data collection, data collection techniques, and the steps involved in data collection.


  • What is data collection?

There are two main data collection techniques: primary and secondary data collection. Primary data collection is the process of gathering data directly from sources. It's often considered the most reliable data collection method, as researchers can collect information directly from respondents.

Secondary data collection is data that has already been collected by someone else and is readily available. This data is usually less expensive and quicker to obtain than primary data.

  • What are the different methods of data collection?

There are several data collection methods, which can be either manual or automated. Manual data collection involves gathering data by hand, typically with pen and paper, while automated data collection involves using software to collect data from online sources, such as social media, website data, transaction data, etc.

Here are the five most popular methods of data collection:

Surveys

Surveys are a very popular method of data collection that organizations can use to gather information from many people. Researchers can conduct multi-mode surveys that reach respondents in different ways, including in person, by mail, over the phone, or online.

As a method of data collection, surveys have several advantages. For instance, they are relatively quick and easy to administer, you can be flexible in what you ask, and they can be tailored to collect data on various topics or from certain demographics.

However, surveys also have several disadvantages. For instance, they can be expensive to administer, and the results may not represent the population as a whole. Additionally, survey data can be challenging to interpret. It may also be subject to bias if the questions are not well-designed or if the sample of people surveyed is not representative of the population of interest.

Interviews

Interviews are a common method of collecting data in social science research. You can conduct interviews in person, over the phone, or even via email or online chat.

Interviews are a great way to collect qualitative and quantitative data. Qualitative interviews are likely your best option if you need to collect detailed information about your subjects' experiences or opinions. If you need to collect more generalized data about your subjects' demographics or attitudes, then quantitative interviews may be a better option.

Interviews are relatively quick and very flexible, allowing you to ask follow-up questions and explore topics in more depth. The downside is that interviews can be time-consuming and expensive due to the amount of information to be analyzed. They are also prone to bias, as both the interviewer and the respondent may have certain expectations or preconceptions that may influence the data.

Direct observation

Observation is a direct way of collecting data. It can be structured (with a specific protocol to follow) or unstructured (simply observing without a particular plan).

Organizations and businesses use observation as a data collection method to gather information about their target market, customers, or competition. Businesses can learn about consumer behavior, preferences, and trends by observing people using their products or services.

There are two types of observation: participatory and non-participatory. In participatory observation, the researcher is actively involved in the observed activities. This type of observation is used in ethnographic research , where the researcher wants to understand a group's culture and social norms. Non-participatory observation is when researchers observe from a distance and do not interact with the people or environment they are studying.

There are several advantages to using observation as a data collection method. It can provide insights that may not be apparent through other methods, such as surveys or interviews. Researchers can also observe behavior in a natural setting, which can provide a more accurate picture of what people do and how and why they behave in a certain context.

There are some disadvantages to using observation as a method of data collection. It can be time-consuming, intrusive, and expensive to observe people for extended periods. Observations can also be tainted if the researcher is not careful to avoid personal biases or preconceptions.

Automated data collection

Business applications and websites are increasingly collecting data electronically to improve the user experience or for marketing purposes.

There are a few different ways that organizations can collect data automatically. One way is through cookies, which are small pieces of data stored on a user's computer. They track a user's browsing history and activity on a site, measuring levels of engagement with a business’s products or services, for example.

Another way organizations can collect data automatically is through web beacons. Web beacons are small images embedded on a web page to track a user's activity.

Finally, organizations can also collect data through mobile apps, which can track user location, device information, and app usage. This data can be used to improve the user experience and for marketing purposes.

Automated data collection is a valuable tool for businesses, helping improve the user experience or target marketing efforts. Businesses should aim to be transparent about how they collect and use this data.

Sourcing data through information service providers

Organizations need to be able to collect data from a variety of sources, including social media, weblogs, and sensors. The process to do this and then use the data for action needs to be efficient, targeted, and meaningful.

In the era of big data, organizations are increasingly turning to information service providers (ISPs) and other external data sources to help them collect data to make crucial decisions. 

Information service providers help organizations collect data by offering personalized services that suit the specific needs of the organizations. These services can include data collection, analysis, management, and reporting. By partnering with an ISP, organizations can gain access to the newest technology and tools to help them to gather and manage data more effectively.

There are also several tools and techniques that organizations can use to collect data from external sources, such as web scraping, which collects data from websites, and data mining, which involves using algorithms to extract data from large data sets. 

Organizations can also use APIs (application programming interfaces) to collect data from external sources. APIs allow organizations to access data stored in another system and share and integrate it into their own systems.
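A minimal sketch of pulling records from an external API, assuming the Python requests library; the endpoint, parameters, and response shape are hypothetical placeholders rather than a real service.

```python
# Minimal sketch: collect data from an external source over an HTTP API.
import requests

response = requests.get(
    "https://api.example.org/v1/datasets/air-quality",  # hypothetical endpoint
    params={"city": "Cleveland", "limit": 100},
    timeout=30,
)
response.raise_for_status()  # fail loudly on HTTP errors

records = response.json()  # assumes the API returns a JSON list of records
print(f"Fetched {len(records)} records")
```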

Finally, organizations can also use manual methods to collect data from external sources. This can involve contacting companies or individuals directly to request data, then using the right tools and methods to extract the insights they need.

  • What are common challenges in data collection?

There are many challenges that researchers face when collecting data. Here are five common examples:

Big data environments

Data collection can be a challenge in big data environments for several reasons. First, data can be located in different places, such as archives, libraries, or online, and the sheer volume of data can make it difficult to identify the most relevant data sets.

Second, the complexity of data sets can make it challenging to extract the desired information. Third, the distributed nature of big data environments can make it difficult to collect data promptly and efficiently.

It is therefore important to have a well-designed data collection strategy that considers the specific needs of the organization and which data sets are the most relevant. Alongside this, consideration should be given to the tools and resources available to support data collection and protect it from unintended use.

Data bias

Data bias is a common challenge in data collection. It occurs when data is collected from a sample that is not representative of the population of interest.

There are different types of data bias, but some common ones include selection bias, self-selection bias, and response bias. Selection bias can occur when the collected data does not represent the population being studied. For example, if a study only includes data from people who volunteer to participate, that data may not represent the general population.

Self-selection bias can also occur when people self-select into a study, such as by taking part only if they think they will benefit from it. Response bias happens when people respond in a way that is not honest or accurate, such as by only answering questions that make them look good. 

These types of data bias present a challenge because they can lead to inaccurate results and conclusions about behaviors, perceptions, and trends. Data bias can be avoided by identifying potential sources or themes of bias and setting guidelines for eliminating them.

Lack of quality assurance processes

One of the biggest challenges in data collection is the lack of quality assurance processes. This can lead to several problems, including incorrect data, missing data, and inconsistencies between data sets.

Quality assurance is important because there are many data sources, and each source may have different levels of quality or corruption. There are also different ways of collecting data, and data quality may vary depending on the method used. 

There are several ways to improve quality assurance in data collection. These include developing clear and consistent goals and guidelines for data collection, implementing quality control measures, using standardized procedures, and employing data validation techniques. By taking these steps, you can ensure that your data is of adequate quality to inform decision-making.

Limited access to data

Another challenge in data collection is limited access to data. This can be due to several reasons, including privacy concerns, the sensitive nature of the data, security concerns, or simply the fact that data is not readily available.

Legal and compliance regulations

Most countries have regulations governing how data can be collected, used, and stored. In some cases, data collected in one country may not be used in another. This means gaining a global perspective can be a challenge. 

For example, if a company is required to comply with the EU General Data Protection Regulation (GDPR), it may not be able to collect data from individuals in the EU without their explicit consent. This can make it difficult to collect data from a target audience.

Legal and compliance regulations can be complex, and it's important to ensure that all data collected is done so in a way that complies with the relevant regulations.

  • What are the key steps in the data collection process?

There are five steps involved in the data collection process. They are:

1. Decide what data you want to gather

Have a clear understanding of the questions you are asking, and then consider where the answers might lie and how you might obtain them. This saves time and resources by avoiding the collection of irrelevant data, and helps maintain the quality of your datasets. 

2. Establish a deadline for data collection

Establishing a deadline for data collection helps you avoid collecting too much data, which can be costly and time-consuming to analyze. It also allows you to plan for data analysis and prompt interpretation. Finally, it helps you meet your research goals and objectives and allows you to move forward.

3. Select a data collection approach

The data collection approach you choose will depend on different factors, including the type of data you need, available resources, and the project timeline. For instance, if you need qualitative data, you might choose a focus group or interview methodology. If you need quantitative data , then a survey or observational study may be the most appropriate form of collection.

4. Gather information

When collecting data for your business, identify your business goals first. Once you know what you want to achieve, you can start collecting data to reach those goals. The most important thing is to ensure that the data you collect is reliable and valid. Otherwise, any decisions you make using the data could result in a negative outcome for your business.

5. Examine the information and apply your findings

As a researcher, it's important to examine the data you're collecting and analyzing before you apply your findings. This is because data can be misleading, leading to inaccurate conclusions. Ask yourself: is it what you are expecting? Is it similar to other datasets you have looked at?

There are many scientific ways to examine data, but some common methods include:

  • looking at the distribution of data points
  • examining the relationships between variables
  • looking for outliers

By taking the time to examine your data and noticing any patterns, strange or otherwise, you can avoid making mistakes that could invalidate your research.
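To make those checks concrete, here is a minimal sketch using pandas on a hypothetical measurements.csv; the file, columns, and the 3-standard-deviation outlier rule are illustrative assumptions.

```python
# Minimal sketch: first-pass examination of a dataset's distribution,
# relationships, and outliers.
import pandas as pd

df = pd.read_csv("measurements.csv")  # hypothetical dataset
numeric = df.select_dtypes("number")

print(numeric.describe())  # distribution of each numeric column
print(numeric.corr())      # pairwise relationships between variables

# Flag rows more than 3 standard deviations from a column mean as potential outliers
z_scores = (numeric - numeric.mean()) / numeric.std()
outliers = df[(z_scores.abs() > 3).any(axis=1)]
print(f"{len(outliers)} rows flagged as potential outliers")
```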

  • How qualitative analysis software streamlines the data collection process

Knowledge derived from data does indeed carry power. However, if you don't convert the knowledge into action, it will remain a resource of unexploited energy and wasted potential.

Luckily, data collection tools enable organizations to streamline their data collection and analysis processes and leverage the derived knowledge to grow their businesses. For instance, qualitative analysis software can be highly advantageous in data collection by streamlining the process, making it more efficient and less time-consuming.

In addition, qualitative analysis software provides a structure for data collection and analysis, ensuring that data is of high quality. It can also help to uncover patterns and relationships that would otherwise be difficult to discern. Moreover, you can use it to replace more expensive data collection methods, such as focus groups or surveys.

Overall, qualitative analysis software can be valuable for any researcher looking to collect and analyze data. By increasing efficiency, improving data quality, and providing greater insights, qualitative software can help to make the research process much more efficient and effective.


Data Collection – Methods Types and Examples


Definition:

Data collection is the process of gathering and collecting information from various sources to analyze and make informed decisions based on the data collected. This can involve various methods, such as surveys, interviews, experiments, and observation.

In order for data collection to be effective, it is important to have a clear understanding of what data is needed and what the purpose of the data collection is. This can involve identifying the population or sample being studied, determining the variables to be measured, and selecting appropriate methods for collecting and recording data.

Types of Data Collection

Types of Data Collection are as follows:

Primary Data Collection

Primary data collection is the process of gathering original and firsthand information directly from the source or target population. This type of data collection involves collecting data that has not been previously gathered, recorded, or published. Primary data can be collected through various methods such as surveys, interviews, observations, experiments, and focus groups. The data collected is usually specific to the research question or objective and can provide valuable insights that cannot be obtained from secondary data sources. Primary data collection is often used in market research, social research, and scientific research.

Secondary Data Collection

Secondary data collection is the process of gathering information from existing sources that have already been collected and analyzed by someone else, rather than conducting new research to collect primary data. Secondary data can be collected from various sources, such as published reports, books, journals, newspapers, websites, government publications, and other documents.

Qualitative Data Collection

Qualitative data collection is used to gather non-numerical data such as opinions, experiences, perceptions, and feelings, through techniques such as interviews, focus groups, observations, and document analysis. It seeks to understand the deeper meaning and context of a phenomenon or situation and is often used in social sciences, psychology, and humanities. Qualitative data collection methods allow for a more in-depth and holistic exploration of research questions and can provide rich and nuanced insights into human behavior and experiences.

Quantitative Data Collection

Quantitative data collection is used to gather numerical data that can be analyzed using statistical methods. This data is typically collected through surveys, experiments, and other structured data collection methods. Quantitative data collection seeks to quantify and measure variables, such as behaviors, attitudes, and opinions, in a systematic and objective way. This data is often used to test hypotheses, identify patterns, and establish correlations between variables. Quantitative data collection methods allow for precise measurement and generalization of findings to a larger population. It is commonly used in fields such as economics, psychology, and natural sciences.
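For example, a simple hypothesis test on quantitative data might compare mean ratings between two groups. The sketch below uses SciPy's independent-samples t-test with made-up 1-5 ratings from two hypothetical departments.

```python
# Minimal sketch: test whether two groups differ on a quantitative measure.
from scipy import stats

dept_a = [4, 5, 3, 4, 4, 5, 3]  # made-up ratings from one department
dept_b = [2, 3, 3, 2, 4, 3, 2]  # made-up ratings from another department

t_stat, p_value = stats.ttest_ind(dept_a, dept_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a small p-value suggests a real difference in means
```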

Data Collection Methods

Data Collection Methods are as follows:

Surveys

Surveys involve asking questions to a sample of individuals or organizations to collect data. Surveys can be conducted in person, over the phone, or online.

Interviews

Interviews involve a one-on-one conversation between the interviewer and the respondent. Interviews can be structured or unstructured and can be conducted in person or over the phone.

Focus Groups

Focus groups are group discussions that are moderated by a facilitator. Focus groups are used to collect qualitative data on a specific topic.

Observation

Observation involves watching and recording the behavior of people, objects, or events in their natural setting. Observation can be done overtly or covertly, depending on the research question.

Experiments

Experiments involve manipulating one or more variables and observing the effect on another variable. Experiments are commonly used in scientific research.

Case Studies

Case studies involve in-depth analysis of a single individual, organization, or event. Case studies are used to gain detailed information about a specific phenomenon.

Secondary Data Analysis

Secondary data analysis involves using existing data that was collected for another purpose. Secondary data can come from various sources, such as government agencies, academic institutions, or private companies.

How to Collect Data

The following are some steps to consider when collecting data:

  • Define the objective : Before you start collecting data, you need to define the objective of the study. This will help you determine what data you need to collect and how to collect it.
  • Identify the data sources : Identify the sources of data that will help you achieve your objective. These sources can be primary sources, such as surveys, interviews, and observations, or secondary sources, such as books, articles, and databases.
  • Determine the data collection method : Once you have identified the data sources, you need to determine the data collection method. This could be through online surveys, phone interviews, or face-to-face meetings.
  • Develop a data collection plan : Develop a plan that outlines the steps you will take to collect the data. This plan should include the timeline, the tools and equipment needed, and the personnel involved.
  • Test the data collection process: Before you start collecting data, test the data collection process to ensure that it is effective and efficient.
  • Collect the data: Collect the data according to the plan you developed in step 4. Make sure you record the data accurately and consistently.
  • Analyze the data: Once you have collected the data, analyze it to draw conclusions and make recommendations.
  • Report the findings: Report the findings of your data analysis to the relevant stakeholders. This could be in the form of a report, a presentation, or a publication.
  • Monitor and evaluate the data collection process: After the data collection process is complete, monitor and evaluate the process to identify areas for improvement in future data collection efforts.
  • Ensure data quality: Ensure that the collected data is of high quality and free from errors. This can be achieved by validating the data for accuracy, completeness, and consistency.
  • Maintain data security: Ensure that the collected data is secure and protected from unauthorized access or disclosure. This can be achieved by implementing data security protocols and using secure storage and transmission methods.
  • Follow ethical considerations: Follow ethical considerations when collecting data, such as obtaining informed consent from participants, protecting their privacy and confidentiality, and ensuring that the research does not cause harm to participants.
  • Use appropriate data analysis methods : Use appropriate data analysis methods based on the type of data collected and the research objectives. This could include statistical analysis, qualitative analysis, or a combination of both.
  • Record and store data properly: Record and store the collected data properly, in a structured and organized format. This will make it easier to retrieve and use the data in future research or analysis.
  • Collaborate with other stakeholders : Collaborate with other stakeholders, such as colleagues, experts, or community members, to ensure that the data collected is relevant and useful for the intended purpose.

Applications of Data Collection

Data collection methods are widely used in different fields, including social sciences, healthcare, business, education, and more. Here are some examples of how data collection methods are used in different fields:

  • Social sciences : Social scientists often use surveys, questionnaires, and interviews to collect data from individuals or groups. They may also use observation to collect data on social behaviors and interactions. This data is often used to study topics such as human behavior, attitudes, and beliefs.
  • Healthcare : Data collection methods are used in healthcare to monitor patient health and track treatment outcomes. Electronic health records and medical charts are commonly used to collect data on patients’ medical history, diagnoses, and treatments. Researchers may also use clinical trials and surveys to collect data on the effectiveness of different treatments.
  • Business : Businesses use data collection methods to gather information on consumer behavior, market trends, and competitor activity. They may collect data through customer surveys, sales reports, and market research studies. This data is used to inform business decisions, develop marketing strategies, and improve products and services.
  • Education : In education, data collection methods are used to assess student performance and measure the effectiveness of teaching methods. Standardized tests, quizzes, and exams are commonly used to collect data on student learning outcomes. Teachers may also use classroom observation and student feedback to gather data on teaching effectiveness.
  • Agriculture : Farmers use data collection methods to monitor crop growth and health. Sensors and remote sensing technology can be used to collect data on soil moisture, temperature, and nutrient levels. This data is used to optimize crop yields and minimize waste.
  • Environmental sciences : Environmental scientists use data collection methods to monitor air and water quality, track climate patterns, and measure the impact of human activity on the environment. They may use sensors, satellite imagery, and laboratory analysis to collect data on environmental factors.
  • Transportation : Transportation companies use data collection methods to track vehicle performance, optimize routes, and improve safety. GPS systems, on-board sensors, and other tracking technologies are used to collect data on vehicle speed, fuel consumption, and driver behavior.

Examples of Data Collection

Examples of Data Collection are as follows:

  • Traffic Monitoring: Cities collect real-time data on traffic patterns and congestion through sensors on roads and cameras at intersections. This information can be used to optimize traffic flow and improve safety.
  • Social Media Monitoring : Companies can collect real-time data on social media platforms such as Twitter and Facebook to monitor their brand reputation, track customer sentiment, and respond to customer inquiries and complaints in real-time.
  • Weather Monitoring: Weather agencies collect real-time data on temperature, humidity, air pressure, and precipitation through weather stations and satellites. This information is used to provide accurate weather forecasts and warnings.
  • Stock Market Monitoring : Financial institutions collect real-time data on stock prices, trading volumes, and other market indicators to make informed investment decisions and respond to market fluctuations in real-time.
  • Health Monitoring : Medical devices such as wearable fitness trackers and smartwatches can collect real-time data on a person’s heart rate, blood pressure, and other vital signs. This information can be used to monitor health conditions and detect early warning signs of health issues.

Purpose of Data Collection

The purpose of data collection can vary depending on the context and goals of the study, but generally, it serves to:

  • Provide information: Data collection provides information about a particular phenomenon or behavior that can be used to better understand it.
  • Measure progress : Data collection can be used to measure the effectiveness of interventions or programs designed to address a particular issue or problem.
  • Support decision-making : Data collection provides decision-makers with evidence-based information that can be used to inform policies, strategies, and actions.
  • Identify trends : Data collection can help identify trends and patterns over time that may indicate changes in behaviors or outcomes.
  • Monitor and evaluate : Data collection can be used to monitor and evaluate the implementation and impact of policies, programs, and initiatives.

When to use Data Collection

Data collection is used when there is a need to gather information or data on a specific topic or phenomenon. It is typically used in research, evaluation, and monitoring and is important for making informed decisions and improving outcomes.

Data collection is particularly useful in the following scenarios:

  • Research : When conducting research, data collection is used to gather information on variables of interest to answer research questions and test hypotheses.
  • Evaluation : Data collection is used in program evaluation to assess the effectiveness of programs or interventions, and to identify areas for improvement.
  • Monitoring : Data collection is used in monitoring to track progress towards achieving goals or targets, and to identify any areas that require attention.
  • Decision-making: Data collection is used to provide decision-makers with information that can be used to inform policies, strategies, and actions.
  • Quality improvement : Data collection is used in quality improvement efforts to identify areas where improvements can be made and to measure progress towards achieving goals.

Characteristics of Data Collection

Data collection can be characterized by several important characteristics that help to ensure the quality and accuracy of the data gathered. These characteristics include:

  • Validity : Validity refers to the accuracy and relevance of the data collected in relation to the research question or objective.
  • Reliability : Reliability refers to the consistency and stability of the data collection process, ensuring that the results obtained are consistent over time and across different contexts.
  • Objectivity : Objectivity refers to the impartiality of the data collection process, ensuring that the data collected is not influenced by the biases or personal opinions of the data collector.
  • Precision : Precision refers to the degree of accuracy and detail in the data collected, ensuring that the data is specific and accurate enough to answer the research question or objective.
  • Timeliness : Timeliness refers to the efficiency and speed with which the data is collected, ensuring that the data is collected in a timely manner to meet the needs of the research or evaluation.
  • Ethical considerations : Ethical considerations refer to the ethical principles that must be followed when collecting data, such as ensuring confidentiality and obtaining informed consent from participants.

Advantages of Data Collection

There are several advantages of data collection that make it an important process in research, evaluation, and monitoring. These advantages include:

  • Better decision-making : Data collection provides decision-makers with evidence-based information that can be used to inform policies, strategies, and actions, leading to better decision-making.
  • Improved understanding: Data collection helps to improve our understanding of a particular phenomenon or behavior by providing empirical evidence that can be analyzed and interpreted.
  • Evaluation of interventions: Data collection is essential in evaluating the effectiveness of interventions or programs designed to address a particular issue or problem.
  • Identifying trends and patterns: Data collection can help identify trends and patterns over time that may indicate changes in behaviors or outcomes.
  • Increased accountability: Data collection increases accountability by providing evidence that can be used to monitor and evaluate the implementation and impact of policies, programs, and initiatives.
  • Validation of theories: Data collection can be used to test hypotheses and validate theories, leading to a better understanding of the phenomenon being studied.
  • Improved quality: Data collection is used in quality improvement efforts to identify areas where improvements can be made and to measure progress towards achieving goals.

Limitations of Data Collection

While data collection has several advantages, it also has some limitations that must be considered. These limitations include:

  • Bias : Data collection can be influenced by the biases and personal opinions of the data collector, which can lead to inaccurate or misleading results.
  • Sampling bias : Data collection may not be representative of the entire population, resulting in sampling bias and inaccurate results.
  • Cost : Data collection can be expensive and time-consuming, particularly for large-scale studies.
  • Limited scope: Data collection is limited to the variables being measured, which may not capture the entire picture or context of the phenomenon being studied.
  • Ethical considerations : Data collection must follow ethical principles to protect the rights and confidentiality of the participants, which can limit the type of data that can be collected.
  • Data quality issues: Data collection may result in data quality issues such as missing or incomplete data, measurement errors, and inconsistencies.
  • Limited generalizability : Data collection may not be generalizable to other contexts or populations, limiting the generalizability of the findings.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Data Collection (Case Western Reserve University Research Data Lifecycle Guide)

Data collection is the process of gathering and measuring information used for research. Collecting data is one of the most important steps in the research process, and is part of all disciplines, including the physical and social sciences, humanities, business, and more. Data comes in many forms, with different ways to store and record it, whether written in a lab notebook or recorded digitally on a computer system.

While methods may differ across disciplines, good data management processes begin with accurately and clearly describing the information recorded, the process used to collect the data, the practices that ensure the quality of the data, and sharing data to enable reproducibility. This section breaks down different topics that need to be addressed while collecting and managing data for research.

Learn more about what’s required for data collection as a researcher at Case Western Reserve University. 

Ensuring Accurate and Appropriate Data Collection

Accurate data collection is vital to ensuring the integrity of research. When planning and executing a research project, it is important to consider the methods of data collection and the storage of data so that results can be used for publications and reporting. The consequences of improper data collection include:

  • inability to answer research questions accurately
  • inability to repeat and validate the study
  • distorted findings resulting in wasted resources
  • misleading other researchers to pursue fruitless avenues of investigation
  • compromising decisions for public policy
  • causing harm to human participants and animal subjects

While the degree of impact from inaccurate data may vary by discipline, there is a potential to cause disproportionate harm when data is misrepresented and misused. This includes fraud or scientific misconduct.

Any data collected in the course of your research should follow RDM best practices to ensure accurate and appropriate data collection. This includes, as appropriate, developing data collection protocols and processes to ensure inconsistencies and other errors are caught and corrected in a timely manner.
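As a hedged illustration of such a protocol, the sketch below (with hypothetical field names and acceptable ranges) flags missing values and out-of-range entries in a batch of newly collected records so they can be reviewed and corrected promptly.

```python
# Minimal validation sketch for newly collected records.
# Field names and valid ranges are assumed examples, not a prescribed standard.
from typing import Dict, List

REQUIRED_FIELDS = {"participant_id", "age", "response_score"}
VALID_RANGES = {"age": (18, 99), "response_score": (1, 5)}  # assumed instrument range

def validate_record(record: Dict) -> List[str]:
    """Return a list of problems found in a single record."""
    problems = []
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            problems.append(f"missing {field}")
    for field, (low, high) in VALID_RANGES.items():
        value = record.get(field)
        if isinstance(value, (int, float)) and not (low <= value <= high):
            problems.append(f"{field}={value} outside {low}-{high}")
    return problems

if __name__ == "__main__":
    batch = [
        {"participant_id": "P01", "age": 34, "response_score": 4},
        {"participant_id": "P02", "age": 17, "response_score": 6},   # both out of range
        {"participant_id": "", "age": 29, "response_score": 3},      # missing ID
    ]
    for record in batch:
        issues = validate_record(record)
        if issues:
            print(record.get("participant_id") or "<no id>", "->", "; ".join(issues))
```

Running a check like this after every collection session, rather than only at the end of a study, keeps the window for correcting errors short.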

Examples of Research Data

Research data is any information that has been collected, observed, generated or created in association with research processes and findings.

Much research data is digital in format, but research data can also extend to non-digital formats such as laboratory notebooks, diaries, or written responses to surveys. Examples may include (but are not limited to):

  • Excel spreadsheets that contain instrument data
  • Documents (text, Word) containing study results
  • Laboratory notebooks, field notebooks, diaries
  • Questionnaires, transcripts, codebooks
  • Audiotapes, videotapes
  • Photographs, films
  • Protein or genetic sequences
  • Test responses
  • Slides, artifacts, specimens, samples
  • Collection of digital objects acquired and generated during the process of research
  • Database contents (video, audio, text, images)
  • Models, algorithms, scripts
  • Contents of an application (input, output, logfiles for analysis software, simulation software, schemas)
  • Source code used in application development

To ensure reproducibility of experiments and results, be sure to include and document information such as: 

  • Methodologies and workflows
  • Standard operating procedures and protocols
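One lightweight way to capture this documentation is a machine-readable data dictionary stored alongside the dataset. The sketch below is only an illustration; the file name, variable names, and SOP reference are hypothetical.

```python
# A minimal, machine-readable data dictionary kept next to the dataset
# (file name, variable names, and SOP reference are hypothetical examples).
import json

data_dictionary = {
    "dataset": "employee_survey_2024.csv",           # assumed file name
    "collection_protocol": "SOP-07, online survey",  # reference to the written procedure
    "variables": {
        "participant_id": {"type": "string", "description": "Pseudonymous participant ID"},
        "dept": {"type": "string", "description": "Department code"},
        "q1_delegation": {"type": "integer", "units": "1-5 Likert",
                          "description": "Manager delegates effectively"},
    },
}

with open("data_dictionary.json", "w") as f:
    json.dump(data_dictionary, f, indent=2)
```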

Data Use Agreements 

When working with data, it is important to understand any restrictions that need to be addressed due to the sensitivity of the data. This includes how you download the data and share it with other collaborators, and how it needs to be properly secured.

Datasets can include potentially sensitive data that needs to be protected and not openly shared. In this case, the dataset cannot be shared or downloaded without permission from CWRU Research Administration and may require an agreement between collaborators and their institutions. All parties will need to abide by the agreement terms, including the destruction of data once the collaboration is complete.

Storage Options 

UTech provides cloud and on-premise storage to support the university research mission. This includes Google Drive, Box, Microsoft 365, and various on-premise solutions for high-speed access and mass storage. A listing of supported options can be found on UTech’s website.

In addition to UTech-supported storage solutions, CWRU also maintains an institutional subscription to OSF (Open Science Framework) . OSF is a cloud-based data storage, sharing, and project collaboration platform that connects to many other cloud services like Drive, Box, and Github to amplify your research and data visibility and discoverability. OSF storage is functionally unlimited.

When selecting a storage platform it is important to understand how you plan to analyze and store your data. Cloud storage provides the ability to store and share data effortlessly and provides capabilities such as revisioning and other means to protect your data. On-premise storage is useful when you have large storage demands and require a high speed connection to instruments that generate data and systems that process data. Both types of storage have their advantages and disadvantages that you should consider when planning your research project.

Data Security

Data security is a set of processes and ongoing practices designed to protect information and the systems used to store and process data. This includes computer systems, files, databases, applications, user accounts, networks, and services on institutional premises, in the cloud, and remotely at the location of individual researchers. 

Effective data security takes into account the confidentiality, integrity, and availability of the information and its use. This is especially important when data contains personally identifiable information, intellectual property, trade secrets, and/or technical data supporting technology transfer agreements (before public disclosure decisions have been made).

Data Categorization 

CWRU uses a 3-tier system to categorize research data based on information types and sensitivity. Determination is based upon risk to the University in the areas of confidentiality, integrity, and availability of data in support of the University's research mission. In this context, confidentiality measures the extent to which information can be disclosed to others, integrity is the assurance that the information is trustworthy and accurate, and availability is a guarantee of reliable access to the information by authorized users.

Information (or data) owners are responsible for determining the impact levels of their information (i.e., what happens if the data is improperly accessed or accidentally lost), implementing the necessary security controls, and managing the risk of negative events, including data loss and unauthorized access.

[Classification table omitted: the original page lists each data classification tier alongside examples.]

Loss, corruption, or inappropriate access to information can interfere with CWRU's mission, interrupt business, and damage reputations or finances.

Securing Data

The classification of data requires certain safeguards or countermeasures, known as controls, to be applied to systems that store data. These can include restricting access to the data, detecting unauthorized access, taking preventative measures to avoid loss of data, encrypting data during transfer and storage, keeping the system and data in a secure location, and receiving training on best practices for handling data (a minimal encryption sketch follows the list below). Controls are classified according to their characteristics, for example:

  • Physical controls e.g. doors, locks, climate control, and fire extinguishers;
  • Procedural or administrative controls e.g. policies, incident response processes, management oversight, security awareness and training;
  • Technical or logical controls e.g. user authentication (login) and logical access controls, antivirus software, firewalls;
  • Legal and regulatory or compliance controls e.g. privacy laws, policies and clauses.
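As a hedged example of one technical control, the sketch below encrypts a data file at rest using the Fernet recipe from the widely used Python cryptography package. The file names are hypothetical, and institutional key-management and approved-tool requirements take precedence over this minimal illustration.

```python
# Minimal sketch: encrypting a collected-data file at rest with Fernet
# (symmetric encryption from the third-party `cryptography` package).
# File names are hypothetical; follow institutional key-management policy.
from cryptography.fernet import Fernet

# Create a small example data file (stands in for real collected data).
with open("survey_responses.csv", "w") as f:
    f.write("participant_id,score\nP01,4\nP02,5\n")

key = Fernet.generate_key()   # store this key separately and securely
fernet = Fernet(key)

with open("survey_responses.csv", "rb") as f:
    plaintext = f.read()
with open("survey_responses.csv.enc", "wb") as f:
    f.write(fernet.encrypt(plaintext))

# An authorized user holding the key can recover the original data:
with open("survey_responses.csv.enc", "rb") as f:
    assert fernet.decrypt(f.read()) == plaintext
```

In practice, the key would be held in an approved secrets store rather than generated ad hoc in an analysis script.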

Principal Investigator (PI) Responsibilities

The CWRU Faculty Handbook provides guidelines for PIs regarding the custody of research data. This includes, where applicable, appropriate measures to protect confidential information. It is everyone’s responsibility to ensure that our research data is kept securely and available for reproducibility and future research opportunities.

University Technology provides many services and resources related to data security including assistance with planning and securing data. This includes processing and storing restricted information used in research. 

Data Collected as Part of Human Subject Research 

To ensure the privacy and safety of individuals participating in a human subject research study, additional rules and processes are in place that describe how the collected data can be used and disclosed. The Office of Research Administration provides information relevant to conducting this type of research. This includes:

  • Guidance on data use agreements and processes for agreements that involve human-related data or human-derived samples coming in or going out of CWRU.
  • Compliance with human subject research rules and regulations.

According to 45 CFR 46, a human subject is "a living individual about whom an investigator (whether professional or student) conducting research:

  • Obtains information or biospecimens through intervention or interaction with the individual, and uses, studies, or analyzes the information or biospecimens; or
  • Obtains, uses, studies, analyzes, or generates identifiable private information or identifiable biospecimens."

The CWRU Institutional Review Board reviews social science/behavioral studies, and low-risk biomedical research not conducted in a hospital setting for all faculty, staff, and students of the University. This includes data collected and used for human subjects research. 

Research conducted in a hospital setting including University Hospitals requires IRB protocol approval.

Questions regarding the management of human subject research data should be addressed to the CWRU Institutional Review Board .

Getting Help With Data Collection

If you are looking for datasets and other resources for your research you can contact your subject area librarian for assistance.

  • Kelvin Smith Library

If you need assistance with administrative items such as data use agreements or finding the appropriate storage solution please contact the following offices.

  • Research Administration
  • UTech Research Computing
  • Information Security Office

Guidance and Resources

  • Information Security Policy
  • Research Data Protection
  • CWRU Faculty Handbook
  • CWRU IRB Guidance


Source: Journal of Graduate Medical Education, 8(2), May 2016.

Design: Selection of Data Collection Methods

Associated data.

Editor's Note: The online version of this article contains resources for further reading and a table of strengths and limitations of qualitative data collection methods.

The Challenge

Imagine that residents in your program have been less than complimentary about interprofessional rounds (IPRs). The program director asks you to determine what residents are learning about in collaboration with other health professionals during IPRs. If you construct a survey asking Likert-type questions such as “How much are you learning?” you likely will not gather the information you need to answer this question. You understand that qualitative data deal with words rather than numbers and could provide the needed answers. How do you collect “good” words? Should you use open-ended questions in a survey format? Should you conduct interviews or focus groups, or carry out direct observation? What should you consider when making these decisions?

Introduction

Qualitative research is often employed when there is a problem and no clear solutions exist, as in the case above that elicits the following questions: Why are residents complaining about rounds? How could we make rounds better? In this context, collecting “good” information or words (qualitative data) is intended to produce information that helps you to answer your research questions, capture the phenomenon of interest, and account for context and the rich texture of the human experience. You may also aim to challenge previous thinking and invite further inquiry.

Coherence or alignment between all aspects of the research project is essential. In this Rip Out we focus on data collection, but in qualitative research, the entire project must be considered. 1 , 2 Careful design of the data collection phase requires the following: deciding who will do what, where, when, and how at the different stages of the research process; acknowledging the role of the researcher as an instrument of data collection; and carefully considering the context studied and the participants and informants involved in the research.

Types of Data Collection Methods

Data collection methods are important, because how the information collected is used and what explanations it can generate are determined by the methodology and analytical approach applied by the researcher. 1 , 2 Five key data collection methods are presented here, with their strengths and limitations described in the online supplemental material.

  1. Questions added to surveys to obtain qualitative data typically are open-ended with a free-text format. Surveys are ideal for documenting perceptions, attitudes, beliefs, or knowledge within a clear, predetermined sample of individuals. “Good” open-ended questions should be specific enough to yield coherent responses across respondents, yet broad enough to invite a spectrum of answers. Examples for this scenario include: What is the function of IPRs? What is the educational value of IPRs, according to residents? Qualitative survey data can be analyzed using a range of techniques.
  2. Interviews are used to gather information from individuals 1-on-1, using a series of predetermined questions or a set of interest areas. Interviews are often recorded and transcribed. They can be structured or unstructured; they can either follow a tightly written script that mimics a survey or be inspired by a loose set of questions that invite interviewees to express themselves more freely. Interviewers need to actively listen and question, probe, and prompt further to collect richer data. Interviews are ideal when used to document participants' accounts, perceptions of, or stories about attitudes toward and responses to certain situations or phenomena. Interview data are often used to generate themes, theories, and models. Many research questions that can be answered with surveys can also be answered through interviews, but interviews will generally yield richer, more in-depth data than surveys. Interviews do, however, require more time and resources to conduct and analyze. Importantly, because interviewers are the instruments of data collection, interviewers should be trained to collect comparable data. The number of interviews required depends on the research question and the overarching methodology used. Examples of these questions include: How do residents experience IPRs? What do residents' stories about IPRs tell us about interprofessional care hierarchies?
  3. Focus groups are used to gather information in a group setting, either through predetermined interview questions that the moderator asks of participants in turn or through a script to stimulate group conversations. Ideally, they are used when the sum of a group of people's experiences may offer more than a single individual's experiences in understanding social phenomena. Focus groups also allow researchers to capture participants' reactions to the comments and perspectives shared by other participants, and are thus a way to capture similarities and differences in viewpoints. The number of focus groups required will vary based on the questions asked and the number of different stakeholders involved, such as residents, nurses, social workers, pharmacists, and patients. The optimal number of participants per focus group, to generate rich discussion while enabling all members to speak, is 8 to 10 people. 3 Examples of questions include: How would residents, nurses, and pharmacists redesign or improve IPRs to maximize engagement, participation, and use of time? How do suggestions compare across professional groups?
  4. Observations are used to gather information in situ using the senses: vision, hearing, touch, and smell. Observations allow us to investigate and document what people do—their everyday behavior—and to try to understand why they do it, rather than focus on their own perceptions or recollections. Observations are ideal when used to document, explore, and understand, as they occur, activities, actions, relationships, culture, or taken-for-granted ways of doing things. As with the previous methods, the number of observations required will depend on the research question and overarching research approach used. Examples of research questions include: How do residents use their time during IPRs? How do they relate to other health care providers? What kind of language and body language are used to describe patients and their families during IPRs?
  5. Textual or content analysis is ideal when used to investigate changes in official, institutional, or organizational views on a specific topic or area to document the context of certain practices or to investigate the experiences and perspectives of a group of individuals who have, for example, engaged in written reflection. Textual analysis can be used as the main method in a research project or to contextualize findings from another method. The choice and number of documents has to be guided by the research question, but can include newspaper or research articles, governmental reports, organization policies and protocols, letters, records, films, photographs, art, meeting notes, or checklists. The development of a coding grid or scheme for analysis will be guided by the research question and will be iteratively applied to selected documents. Examples of research questions include: How do our local policies and protocols for IPRs reflect or contrast with the broader discourses of interprofessional collaboration? What are the perceived successful features of IPRs in the literature? What are the key features of residents' reflections on their interprofessional experiences during IPRs? (A rough coding-grid sketch follows this list.)
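As a rough, hedged sketch of how a simple coding grid might be applied to documents (method 5 above), the example below counts occurrences of analyst-defined code terms. The codes and documents are hypothetical, and real textual analysis relies on iterative, interpretive coding rather than keyword counts alone.

```python
# Minimal sketch: applying a simple keyword-based coding grid to a set of documents.
# Codes and documents are hypothetical; this approximates only one small part of
# textual/content analysis.
import re
from collections import Counter

coding_grid = {
    "collaboration": ["collaborat", "team", "interprofessional"],
    "hierarchy": ["hierarch", "defer", "senior"],
    "learning": ["learn", "teach", "feedback"],
}

documents = {
    "policy_2019": "Interprofessional rounds support team-based learning and feedback.",
    "resident_reflection": "I deferred to the senior physician and learned little.",
}

for doc_name, text in documents.items():
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for code, stems in coding_grid.items():
        counts[code] = sum(1 for t in tokens for s in stems if t.startswith(s))
    print(doc_name, dict(counts))
```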

How You Can Start TODAY

  • Review medical education journals to find qualitative research in your area of interest and focus on the methods used as well as the findings.
  • When you have chosen a method, read several different sources on it.
  • From your readings, identify potential colleagues with expertise in your choice of qualitative method, as well as others in your discipline who would like to learn more, and organize potential working groups to discuss challenges that arise in your work.

What You Can Do LONG TERM

  • Either locally or nationally, build a community of like-minded scholars to expand your qualitative expertise.
  • Use a range of methods to develop a broad program of qualitative research.



Data Collection Methods | Step-by-Step Guide & Examples

Published on 4 May 2022 by Pritha Bhandari .

Data collection is a systematic process of gathering observations or measurements. Whether you are performing research for business, governmental, or academic purposes, data collection allows you to gain first-hand knowledge and original insights into your research problem .

While methods and aims may differ between fields, the overall process of data collection remains largely the same. Before you begin collecting data, you need to consider:

  • The  aim of the research
  • The type of data that you will collect
  • The methods and procedures you will use to collect, store, and process the data

To collect high-quality data that is relevant to your purposes, follow these four steps.

Table of contents

  • Step 1: Define the aim of your research
  • Step 2: Choose your data collection method
  • Step 3: Plan your data collection procedures
  • Step 4: Collect the data
  • Frequently asked questions about data collection

Before you start the process of data collection, you need to identify exactly what you want to achieve. You can start by writing a problem statement : what is the practical or scientific issue that you want to address, and why does it matter?

Next, formulate one or more research questions that precisely define what you want to find out. Depending on your research questions, you might need to collect quantitative or qualitative data :

  • Quantitative data is expressed in numbers and graphs and is analysed through statistical methods .
  • Qualitative data is expressed in words and analysed through interpretations and categorisations.

If your aim is to test a hypothesis , measure something precisely, or gain large-scale statistical insights, collect quantitative data. If your aim is to explore ideas, understand experiences, or gain detailed insights into a specific context, collect qualitative data.

If you have several aims, you can use a mixed methods approach that collects both types of data.

  • Your first aim is to assess whether there are significant differences in perceptions of managers across different departments and office locations.
  • Your second aim is to gather meaningful feedback from employees to explore new ideas for how managers can improve.


Based on the data you want to collect, decide which method is best suited for your research.

  • Experimental research is primarily a quantitative method.
  • Interviews , focus groups , and ethnographies are qualitative methods.
  • Surveys , observations, archival research, and secondary data collection can be quantitative or qualitative methods.

Carefully consider what method you will use to gather data that helps you directly answer your research questions.

Data collection methods
Method When to use How to collect data
Experiment To test a causal relationship. Manipulate variables and measure their effects on others.
Survey To understand the general characteristics or opinions of a group of people. Distribute a list of questions to a sample online, in person, or over the phone.
Interview/focus group To gain an in-depth understanding of perceptions or opinions on a topic. Verbally ask participants open-ended questions in individual interviews or focus group discussions.
Observation To understand something in its natural setting. Measure or survey a sample without trying to affect them.
Ethnography To study the culture of a community or organisation first-hand. Join and participate in a community and record your observations and reflections.
Archival research To understand current or historical events, conditions, or practices. Access manuscripts, documents, or records from libraries, depositories, or the internet.
Secondary data collection To analyse data from populations that you can’t access first-hand. Find existing datasets that have already been collected, from sources such as government agencies or research organisations.

When you know which method(s) you are using, you need to plan exactly how you will implement them. What procedures will you follow to make accurate observations or measurements of the variables you are interested in?

For instance, if you’re conducting surveys or interviews, decide what form the questions will take; if you’re conducting an experiment, make decisions about your experimental design .

Operationalisation

Sometimes your variables can be measured directly: for example, you can collect data on the average age of employees simply by asking for dates of birth. However, often you’ll be interested in collecting data on more abstract concepts or variables that can’t be directly observed.

Operationalisation means turning abstract conceptual ideas into measurable observations. When planning how you will collect data, you need to translate the conceptual definition of what you want to study into the operational definition of what you will actually measure.

  • You ask managers to rate their own leadership skills on 5-point scales assessing the ability to delegate, decisiveness, and dependability.
  • You ask their direct employees to provide anonymous feedback on the managers regarding the same topics.
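A minimal sketch of this operationalisation step, with hypothetical item names and a simple unweighted average, might look like the following:

```python
# Hedged sketch: operationalising "leadership skill" as the mean of concrete survey items.
# The item names and the unweighted average are illustrative assumptions.
from statistics import mean

def leadership_score(ratings: dict) -> float:
    """Combine 5-point ratings of observable behaviours into one composite score."""
    items = ["delegation", "decisiveness", "dependability"]
    return mean(ratings[item] for item in items)

manager_a = {"delegation": 4, "decisiveness": 3, "dependability": 5}
print(round(leadership_score(manager_a), 2))  # -> 4.0
```

The composite score stands in for the abstract concept of leadership skill; how the items are chosen and weighted is itself a design decision worth documenting.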

You may need to develop a sampling plan to obtain data systematically. This involves defining a population , the group you want to draw conclusions about, and a sample, the group you will actually collect data from.

Your sampling method will determine how you recruit participants or obtain measurements for your study. To decide on a sampling method you will need to consider factors like the required sample size, accessibility of the sample, and time frame of the data collection.
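For instance, a simple random sample can be drawn from a sampling frame in a few lines; the frame of employee IDs and the sample size below are hypothetical.

```python
# Hedged sketch: drawing a simple random sample from a sampling frame.
# The frame (employee IDs) and the sample size are hypothetical.
import random

sampling_frame = [f"EMP{i:04d}" for i in range(1, 501)]  # 500 employees
random.seed(42)                                           # fixed seed for a reproducible draw
sample = random.sample(sampling_frame, k=50)              # recruit these 50 participants
print(sample[:5])
```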

Standardising procedures

If multiple researchers are involved, write a detailed manual to standardise data collection procedures in your study.

This means laying out specific step-by-step instructions so that everyone in your research team collects data in a consistent way – for example, by conducting experiments under the same conditions and using objective criteria to record and categorise observations.

This helps ensure the reliability of your data, and you can also use it to replicate the study in the future.

Creating a data management plan

Before beginning data collection, you should also decide how you will organise and store your data.

  • If you are collecting data from people, you will likely need to anonymise and safeguard the data to prevent leaks of sensitive information (e.g. names or identity numbers); a minimal anonymisation sketch follows this list.
  • If you are collecting data via interviews or pencil-and-paper formats, you will need to perform transcriptions or data entry in systematic ways to minimise distortion.
  • You can prevent loss of data by having an organisation system that is routinely backed up.
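As a hedged sketch of one common anonymisation step, direct identifiers can be replaced with salted hashes before analysis. The field names are hypothetical, and the salt must itself be stored securely, or discarded entirely if re-identification will never be needed.

```python
# Hedged sketch: replacing direct identifiers with salted hashes before analysis.
# Field names are hypothetical; keep the salt secret (or discard it to prevent re-linking).
import hashlib
import secrets

salt = secrets.token_hex(16)

def pseudonymise(identifier: str) -> str:
    return hashlib.sha256((salt + identifier).encode("utf-8")).hexdigest()[:12]

records = [
    {"name": "A. Example", "dept": "Sales", "q1": 4},
    {"name": "B. Sample", "dept": "IT", "q1": 2},
]
for record in records:
    record["participant_id"] = pseudonymise(record.pop("name"))  # drop the direct identifier
print(records)
```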

Finally, you can implement your chosen methods to measure or observe the variables you are interested in.

In the employee survey example, the closed-ended questions ask participants to rate their manager’s leadership skills on scales from 1 to 5. The data produced is numerical and can be statistically analysed for averages and patterns.

To ensure that high-quality data is recorded in a systematic way, here are some best practices:

  • Record all relevant information as and when you obtain data. For example, note down whether or how lab equipment is recalibrated during an experimental study.
  • Double-check manual data entry for errors.
  • If you collect quantitative data, you can assess the reliability and validity to get an indication of your data quality (a small reliability-check sketch follows this list).
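As one example of a reliability check, the sketch below computes Cronbach's alpha, a common internal-consistency measure, for a small made-up set of responses to a hypothetical three-item scale.

```python
# Hedged sketch: Cronbach's alpha for a hypothetical three-item, 5-point scale.
# Rows are respondents, columns are items; population variance is used consistently.
from statistics import pvariance

responses = [
    [4, 5, 4],
    [3, 3, 4],
    [5, 4, 5],
    [2, 2, 3],
    [4, 4, 4],
]

k = len(responses[0])                                    # number of items
item_vars = [pvariance(col) for col in zip(*responses)]  # variance of each item
total_var = pvariance([sum(row) for row in responses])   # variance of total scores
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))  # ~0.9 for this made-up data
```

Values closer to 1 indicate that the items move together; very low values suggest the items may not be measuring the same underlying construct.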

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organisations.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g., understanding the needs of your consumers or user testing your website).
  • You can control and standardise the process for high reliability and validity (e.g., choosing appropriate measurements and sampling methods ).

However, there are also some drawbacks: data collection can be time-consuming, labour-intensive, and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the  consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity   refers to the  accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research , you also have to consider the internal and external validity of your experiment.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

Cite this Scribbr article


Bhandari, P. (2022, May 04). Data Collection Methods | Step-by-Step Guide & Examples. Scribbr. Retrieved 29 August 2024, from https://www.scribbr.co.uk/research-methods/data-collection-guide/


What Data Gathering Strategies Should I Use?

  • First Online: 28 June 2019


Ray Cooksey & Gael McDonald


In this chapter, we review many of the data gathering strategies that can be used by postgraduates in social and behavioural research. We explore three major domains of data gathering strategies: strategies for connecting with people (encompassing interaction-based and observation-based strategies), exploring people’s handiworks (encompassing participant-centred and artefact-based strategies) and structuring people’s experiences (encompassing data-shaping and experience-focused strategies). In light of our pluralist perspective, we consider each data gathering strategy, not only as a distinct and self-contained strategy (which may encompass a range of more specific data gathering approaches), but also as part of a larger more interconnected and dynamic toolkit. Our goal is to highlight some key considerations and issues associated with each strategy that might be relevant to your decision making about which might be appropriate for you to use as part of your research journey, given your research frame, pattern(s) of guiding assumptions, contextualisations, positionings, research questions/hypotheses, scoping and shaping considerations and MU configuration.



Pink, S. (2013). Doing visual ethnography . Los Angeles: Sage Publications.

Place, U. T. (1992). The role of the ethnomethodological experiment in the empirical investigation of social norms and its application to conceptual analysis. Philosophy of the Social Sciences, 22 (4), 461–474.

Plowright, D. (2011). Using mixed methods: Frameworks for an integrated methodology . Los Angeles: Sage Publications.

Prasad, A. (2002). The contest over meaning: Hermeneutics as an interpretive methodology for understanding texts. Organizational Research Methods, 5 (1), 12–33.

Proctor, T. (2010). Creative problem solving for managers (3rd ed.). New York: Routledge.

Provost, F., & Fawcett, T. (2013). Data science and its relationship to big data and data-driven decision making. Big Data, 1 (1), 51–59.

Punch, K. (2003). Survey research: The basics . London: Sage Publications.

Railsback, S. F., & Grimm, V. (2012). Agent-based and individual-based modeling: A practical introduction . Princeton, NJ: Princeton University Press.

Rapley, T. (2007). Doing conversation, discourse and document analysis . London: Sage Publications.

Reynolds, C. R., Livingston, R. B., Willson, V. L., & Willson, V. (2010). Measurement and assessment in education (2nd ed.). Boston: Pearson Education International.

Richmond, B. (2004). An introduction to systems thinking: STELLA software . Lebanon, OH: IEEE Systems.

Robinson, J. P., Shaver, P. R., & Wrightsman, L. S. (Eds.). (1991). Measures of personality and social psychological attitudes . San Diego: Academic Press.

Rosenthal, R. (1984). Meta-analytic procedures for social research . Beverly Hills, CA: Sage Publications.

Ross, G. (2001). Visual methodologies: An introduction to the interpretation of visual materials . London: Sage Publications.

Rowlinson, M. (2004). Historical analysis of company documents. In C. Cassell & G. Symon (Eds.), Essential guide to qualitative methods in organizational research (pp. 301–311). London: Sage Publications.

Rymaszewski, M., Au, W. J., Wallace, M., Winters, C., Ondrejka, C., & Batstone-Cunningham, B. (2007). Second life: The official guide . Hoboken, NJ: Wiley.

Sandall, J. L. (2006). Navigating pathways through complex systems of interacting problems: Strategic management of native vegetation policy . Unpublished PhD thesis, New England Business School, University of New England, Armidale, NSW, Australia.

Sandelowski, M., Docherty, S., & Emden, C. (1997). Qualitative metasynthesis: Issues and techniques. Research in Nursing & Health, 20 (4), 365–371.

Samra-Fredericks, D. (2004). Talk-in-interaction/conversation analysis. In C. Cassell & G. Symon (Eds.), Essential guide to qualitative methods in organizational research (pp. 214–227). London: Sage Publications.

Sapsford, R. (2007). Survey research (2nd ed.). London: Sage Publications.

Schembri, S., & Boyle, M. V. (2013). Visual ethnography: Achieving rigorous and authentic interpretations. Journal of Business Research, 66 (9), 1251–1254.

Schkade, D. A., & Payne, J. W. (1994). How people respond to contingent valuation questions: A verbal protocol analysis of willingness to pay for an environmental regulation. Journal of Environmental Economics and Management, 26 (1), 88–109.

Schmidt, F. L., & Hunter, J. E. (2014). Methods of meta-analysis: Correcting error and bias in research findings . Los Angeles: Sage Publications.

Schmitt, R. (2005). Systematic metaphor analysis as a method of qualitative research. The Qualitative Report, 10 (2), 358–394.

Schreiber, R., Crooks, D., & Stern, P. N. (1997). Qualitative meta-analysis. In J. Morse (Ed.), Completing a qualitative project: Details and dialogue (pp. 311–326). Thousand Oaks, CA: Sage Publications.

Schreier, M. (2012). Qualitative content analysis in practice . Los Angeles: Sage Publications.

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2001). Experimental and quasi-experimental designs for generalized causal inference (2nd ed.). Boston: Cengage.

Shum, D., O’Gorman, J., Creed, P., & Myors, B. (2017). Psychological testing and assessment ebook (3rd ed.). New York: Oxford University Press.

Senge, P., Kleiner, A., Roberts, C., Ross, R., & Smith, B. (1994). The fifth discipline field book . London: Nicholas Brealey.

Shiratori, R., Arai, K., & Kato, F. (2005). Gaming, simulations and society: Research scope and perspective . Tokyo: Springer.

Sloan, L., & Quan-Haase, A. (Eds.). (2017). The Sage handbook of social media research methods . Los Angeles: Sage Publications.

Small, S. D., Wuerz, R. C., Simon, R., Shapiro, N., Conn, A., & Setnik, G. (1999). Demonstration of high-fidelity simulation team training for emergency medicine. Academic Emergency Medicine, 6 (4), 312–323.

Smith, A. E., & Humphreys, M. S. (2006). Evaluation of unsupervised semantic mapping of natural language with Leximancer concept mapping. Behavior Research Methods, 38 (2), 262–279.

Smith, E. (2008). Pitfalls and promises: The use of secondary data analysis in educational research. British Journal of Educational Studies, 56 (3), 323–339.

Smith, M. (2007). Research methods in accounting . Los Angeles: Sage Publications.

Solórzano, D. G., & Yosso, T. J. (2002). Critical race methodology: Counter-storytelling as an analytical framework for education research. Qualitative Inquiry, 8 (1), 23–44.

Stengel, D. N., & Chaffe-Stengel, P. (2012). Working with economic indicators: Interpretation and sources . New York: Business Expert Press.

Sterman, J. S. (2000). Business dynamics: Systems thinking and modeling for a complex world . New York: McGraw-Hill/Irwin.

Stewart, D., Shamdasani, P. N., & Rook, D. W. (2007). Focus groups: Theory and practice (2nd ed.). Thousand Oaks, CA: Sage Publications.

Stiles, D. (2004,). Pictorial representation. In C. Cassell & G. Symon (Eds.), Essential guide to qualitative methods in organizational research (pp. 127–140). London: Sage Publications.

Stuart, E. A., & Rubin, D. B. (2008). Best practice in quasi-experimental designs: Matching methods for causal inference. In J. W. Osborne (Ed.), Best practices in quantitative methods (pp. 155–176). Los Angeles: Sage Publications.

Sue, V. M., & Ritter, L. A. (2012). Conducting online surveys (2nd ed.). Los Angeles: Sage Publications.

Suri, H. (2011). Purposeful sampling in qualitative research synthesis. Qualitative Research Journal, 11 (2), 63–75.

Symon, G. (2004). Qualitative research diaries. In C. Cassell & G. Symon (Eds.), Essential guide to qualitative methods in organizational research (pp. 98–113). London: Sage Publications.

Taber, C. S., & Timpone, R. J. (1996). Computational modeling . Thousand Oaks, CA: Sage Publications.

Tansey, O. (2007). Process tracing and elite interviewing: A case for non-probability sampling. PS: Political Science & Politics , 40 (4), 765–772.

Thornton III, G. C., & Kedharnath, U. (2013). Work sample tests. In K. F. Geisinger et al. (Eds.), APA handbook of testing and assessment in psychology, Vol. 1. Test theory and testing and assessment in industrial and organizational psychology (pp. 533–550). Washington, DC: American Psychological Association.

Trenor, J. M., Miller, M. K., & Gipson, K. G. (2011). Utilization of a think-aloud protocol to cognitively validate a survey instrument identifying social capital resources of engineering undergraduates. In Electronic Proceedings of the American Society for Engineering Education Annual Conference and Exposition , Vancouver, CA (pp. 22.1656.1–22.1656.15). Retrieved August 11, 2018, from https://peer.asee.org/utilization-of-a-think-aloud-protocol-to-cognitively-validate-a-survey-instrument-identifying-social-capital-resources-of-engineering-undergraduates .

van Someren, M. W., Barnard, Y. F., & Sandberg, J. A. C. (1994). The think aloud method: A practical approach to modeling cognitive processes . London: Academic Press.

Veal, A. J. (2005). Business research methods: A managerial approach (2nd ed.). French’s Forest, NSW: Pearson Education.

Walker, S. J. (2014). Big data: A revolution that will transform how we live, work, and think. International Journal of Advertising, 33 (1), 181–183.

Walsh, J. J. (1997). Projective testing techniques. In J. P. Keeves (Ed.), Educational research, methodology and measurement: An international handbook (2nd ed., pp. 954–958). Oxford, UK: Pergamon.

Walsh, S., & Clegg, C. (2004). Soft systems analysis: Reflections and update. In C. Cassell & G. Symon (Eds.), Essential guide to qualitative methods in organizational research (pp. 334–348). London: Sage Publications.

Walsh, D., & Downe, S. (2005). Meta-synthesis method for qualitative research: A literature review. Journal of Advanced Nursing, 50 (2), 204–211.

Waddington, D. (2004). Participant observation. In C. Cassell & G. Symon (Eds.), Essential guide to qualitative methods in organizational research (pp. 154–164). London: Sage Publications.

Weber, R. P. (1990). Basic content analysis (2nd ed.). Newbury Park, CA: Sage Publications.

Webb, E. J., Campbell, D. T., Schwartz, R. D., & Sechrest, L. (2000). Unobtrusive measures (Rev ed.). Thousand Oaks, CA: Sage Publications.

Webster, J. G. (2015). The physiological measurement handbook . Boca Raton: CRC Press.

Wilcox, R. R. (1997). Simulation as a research technique. In J. P. Keeves (Ed.), Educational research, methodology and measurement: An international handbook (2nd ed., pp. 150–154). Oxford, UK: Pergamon.

Wilensky, U. (1999). NetLogo . Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL. Retrieved August 11, 2018, from http://ccl.northwestern.edu/netlogo/ .

Wilensky, U. (2003). NetLogo Traffic Grid model . Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL. Retrieved August 11, 2018, from http://ccl.northwestern.edu/netlogo/models/TrafficGrid .

Williams, J. H. (2014). Defining and measuring nature: The make of all things . San Raphael, CA: Morgan & Claypool.

Wybo, J. L. (2008). The role of simulation exercises in the assessment of robustness and resilience of private or public organizations. In H. J. Pasman & I. A Kirillov (Eds.), Resilience of cities to terrorist and other threats (pp. 491–507). Dordrecht: Springer.

Yamarone, R. (2017). The economic indicator handbook: How to evaluate economic trends to maximize profits and minimize losses . Hoboken, NJ: Wiley.

Yin, R. K. (2011). Qualitative research from start to finish . New York: The Guilford Press.

Download references


Appendix: Clarifying Experimental/Quasi-experimental Design Jargon

These contrasting concepts provide insights into the way that researchers, who implement the Manipulative experience-focused strategy under the positivist pattern of guiding assumptions, talk or write about certain features of their research.

Between groups versus within groups IVs

A between groups IV has categories that define groups containing different samples of participants (e.g., a treatment group and a control group). A within groups IV defines groups or conditions, all of which are experienced by each participant or by matched sets of participants, such as twins or participants matched on key characteristics. A within groups IV may also include intervention time-aligned conditions, such as a pre-test and a post-test, giving rise to a class of experiments called 'repeated measures' designs.

Factorial versus nested designs

A factorial design involves groups defined by at least two IVs, where each category of one IV is combined with each category of another IV such that the groups exhaust all possible combinations (e.g., a quasi-experiment involving the IV of gender, with 2 categories (male and female), and an experimental IV, with 2 categories (treatment condition and control condition), yields a 2 × 2 factorial design with four distinct pairings of IV categories: male-treatment, male-control, female-treatment and female-control). If you had a between groups factorial design with four IVs and each IV had 2 categories (or 'levels'), you would have a 2 × 2 × 2 × 2 factorial design, and that design would have 16 distinct groups of participants. A nested design involves groups defined by the categories of one IV being hierarchically embedded inside each category of another IV (e.g., an IV defined by year levels for classes of students at the primary school level is embedded within a second IV defined by specific schools). Nesting means, for example, that a year 6 class in one school cannot be considered equivalent to a year 6 class in another school (different teachers, different curricular emphases, different classroom environments, …), so classes must be considered as nested within schools. Another type of nested design is a multi-level design, which compares samples defined by IVs at different levels of analysis (e.g., departments within organisations within industries) both within and between those levels.
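To make the arithmetic concrete, here is a minimal Python sketch (illustrative only; the IV names are hypothetical) that enumerates the cells of a between groups factorial design and confirms that a 2 × 2 × 2 × 2 design yields 16 distinct groups:

```python
from itertools import product

# Hypothetical IVs, each with two categories ("levels")
ivs = {
    "gender": ["male", "female"],
    "condition": ["treatment", "control"],
    "location": ["urban", "rural"],
    "time_of_day": ["morning", "afternoon"],
}

# Every cell of the factorial design is one combination of categories
cells = list(product(*ivs.values()))

print(len(cells))  # 2 * 2 * 2 * 2 = 16 distinct groups
for cell in cells[:4]:
    print(dict(zip(ivs.keys(), cell)))
```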

Main effects versus interaction effects

For causal-comparative designs, a comparison of the groups or conditions defined by the categories (or 'levels') of a single IV comprises the main effect of that IV on the DV. The comparison of groups simultaneously defined by combinations of the categories of two or more IVs is termed an interaction effect. An interaction yields a conditional interpretation, where the pattern of relationship between one IV and the DV differs depending upon which category of another IV you choose to look at. Technically speaking, a moderator IV is an interaction IV. Where two IVs define an interaction, this is called a 2-way interaction; three IVs define a 3-way interaction, and so on. In a factorial design, there are as many main effects as there are IVs in the design; all possible pairs of IVs form 2-way interactions, all possible triplets of IVs form 3-way interactions, and so on. For example, if you had a factorial design involving 4 IVs (call them A, B, C and D), there would be 4 main effects (A, B, C and D), six 2-way interactions (AB (read as 'A by B interaction'), AC, AD, BC, BD and CD), four 3-way interactions (ABC, ABD, ACD and BCD) and one 4-way interaction (ABCD) to test.
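The counts in that example follow directly from binomial coefficients, as this small Python sketch (assuming the same hypothetical four-IV design) shows:

```python
from math import comb
from itertools import combinations

ivs = ["A", "B", "C", "D"]

# k = 1 gives main effects, k = 2 gives 2-way interactions, and so on
for k in range(1, len(ivs) + 1):
    effects = ["".join(c) for c in combinations(ivs, k)]
    print(f"{k}-way effects: {comb(len(ivs), k)} -> {effects}")

# Printed counts: 4 main effects, 6 two-way, 4 three-way and 1 four-way interaction
```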

Incomplete or fractional factorial designs

In some design circumstances, it may not be possible or feasible for you to include all possible factorial combinations of IV categories in a research design. For example, if you have four IVs, each with 3 categories, there would be 3 × 3 × 3 × 3 = 81 possible factorial combinations, which may be too many for you to find adequate samples to fill or to have participants rate or evaluate. As an alternative approach, you could employ an incomplete or fractional factorial design, which includes only a specific fraction or proportion of the possible combinations. In the previous example, a 1/3 fractional factorial design would require only 27 combinations instead of 81. The fractional combinations used are identified by sacrificing information about higher order interactions (e.g., three- and four-way interactions) in order to provide viable estimates of lower-order effects, such as main effects and two-way interactions (fractional factorial designs are often used in conjoint measurement designs, for example). One example of an incomplete design is a ‘Latin square’ design, which can control, using counterbalancing, for order effects between conditions or other extraneous/‘nuisance’ variables
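As an illustration of the counterbalancing idea, the following Python sketch (hypothetical conditions, illustrative only) builds a simple Latin square by rotating the condition order, so that each condition appears once in every ordinal position across participants:

```python
def latin_square(conditions):
    """Return a Latin square: each condition appears once per row and per column."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)] for row in range(n)]

# Hypothetical within groups conditions whose presentation order must be counterbalanced
conditions = ["low load", "medium load", "high load", "control"]

for row in latin_square(conditions):
    print(" -> ".join(row))  # one presentation order per participant (or group of participants)
```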

Manipulated (usually categorical/group-based) versus measured IVs

A manipulated IV is one where you can control who experiences a specific category of the IV (e.g., treatment or control conditions) using random assignment of participants to category. In contrast, a measured IV is one where you must take the IV as having a pre-existing value with respect to every participant and therefore you can only measure it (e.g., age, gender, ethnic background). Thus, in a true experiment, you seek to manipulate all IVs being evaluated whereas in a quasi-experiment, you generally have a mix of manipulated IVs and measured IVs


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this chapter

Cooksey, R., McDonald, G. (2019). What Data Gathering Strategies Should I Use?. In: Surviving and Thriving in Postgraduate Research. Springer, Singapore. https://doi.org/10.1007/978-981-13-7747-1_14


Data Collection Methods: Types & Examples


Data is a collection of facts, figures, objects, symbols, and events from different sources. Organizations collect data using various methods to make better decisions. Without data, it would be difficult for organizations to make appropriate decisions, so data is collected from different audiences at various times.

For example, an organization must collect data on product demand, customer preferences, and competitors before launching a new product. If data is not collected beforehand, the organization’s newly launched product may fail for many reasons, such as insufficient demand or an inability to meet customer needs.

Although data is a valuable asset for every organization, it does not serve any purpose until it is analyzed or processed to achieve the desired results.

What are Data Collection Methods?

Data collection methods are techniques and procedures for gathering information for research purposes. They can range from simple self-reported surveys to more complex quantitative or qualitative experiments.

Some common data collection methods include surveys , interviews, observations, focus groups, experiments, and secondary data analysis . The data collected through these methods can then be analyzed to support or refute research hypotheses and draw conclusions about the study’s subject matter.

Understanding Data Collection Methods

Data collection methods encompass a variety of techniques and tools for gathering quantitative and qualitative data. These methods are integral to the data collection process and ensure accurate and comprehensive data acquisition.

Quantitative data collection methods involve systematic approaches, such as numerical measurement, surveys, polls, and statistical analysis, to quantify phenomena and trends.

Conversely, qualitative data collection methods capture non-numerical information through techniques such as interviews, focus groups, and observations to delve deeper into attitudes, behaviors, and motivations.

Combining quantitative and qualitative data collection techniques can enrich an organization’s datasets and yield comprehensive insights into complex phenomena.

Effective utilization of accurate data collection tools and techniques enhances the accuracy and reliability of collected data, facilitating informed decision-making and strategic planning.


Importance of Data Collection Methods

Data collection methods play a crucial role in the research process, as they determine the quality and accuracy of the data collected. Here are some of the main reasons data collection methods matter:

  • Quality and Accuracy: The choice of data collection technique directly impacts the quality and accuracy of the data obtained. Properly designed methods help ensure that the data collected is error-free and relevant to the research questions.
  • Relevance, Validity, and Reliability: Effective data collection methods help ensure that the data collected is relevant to the research objectives, valid (measuring what it intends to measure), and reliable (consistent and reproducible).
  • Bias Reduction and Representativeness: Carefully chosen data collection methods can help minimize biases inherent in the research process, such as sampling or response bias. They also aid in achieving a representative sample, enhancing the findings’ generalizability.
  • Informed Decision Making: Accurate and reliable data collected through appropriate methods provide a solid foundation for making informed decisions based on research findings. This is crucial for both academic research and practical applications in various fields.
  • Achievement of Research Objectives: Data collection methods should align with the research objectives to ensure that the collected data effectively addresses the research questions or hypotheses. Properly collected data facilitates the attainment of these objectives.
  • Support for Validity and Reliability: Validity and reliability are essential to sound research. The choice of data collection methods can either enhance or detract from the validity and reliability of research findings. Therefore, selecting appropriate methods is critical for ensuring the credibility of the research.

The importance of data collection methods cannot be overstated, as they play a key role in the research study’s overall success and internal validity .

Types of Data Collection Methods

The choice of data collection method depends on the research question being addressed, the type of data needed, and the resources and time available. Data collection methods can be categorized into primary and secondary methods.


1. Primary Data Collection Methods

Primary data is collected from first-hand sources and has not been used or published before. The data gathered by primary data collection methods is highly accurate and specific to the research’s purpose.

Primary data collection methods can be divided into two categories: quantitative and qualitative.

Quantitative Methods:

Quantitative techniques for market research and demand forecasting usually use statistical tools. In these techniques, demand is forecasted based on historical data. These methods of primary data collection are generally used to make long-term forecasts. Statistical analysis methods are highly reliable as subjectivity is minimal.

  • Time Series Analysis: A time series refers to a sequential order of values of a variable, known as a trend, at equal time intervals. Using patterns, an organization can predict the demand for its products and services over a projected time period. 
  • Smoothing Techniques: Smoothing techniques can be used in cases where the time series lacks significant trends. They eliminate random variation from the historical demand, helping identify patterns and demand levels to estimate future demand. The most common methods used in smoothing demand forecasting are the simple moving average and weighted moving average methods (see the short sketch after this list). 
  • Barometric Method: Also known as the leading indicators approach, researchers use this method to speculate future trends based on current developments. When past events are considered to predict future events, they act as leading indicators.
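As a minimal illustration of these smoothing techniques (hypothetical demand figures, not from any real dataset), the following Python sketch computes a simple and a weighted moving average over a short demand history:

```python
# Hypothetical monthly demand history (units sold)
demand = [120, 135, 128, 140, 150, 145]

def simple_moving_average(series, window):
    """Average of the most recent `window` observations."""
    return sum(series[-window:]) / window

def weighted_moving_average(series, weights):
    """Weighted average of the most recent observations; weights should sum to 1."""
    recent = series[-len(weights):]
    return sum(w * x for w, x in zip(weights, recent))

# Forecast next month's demand from the last three months
print(simple_moving_average(demand, window=3))                    # (140 + 150 + 145) / 3
print(weighted_moving_average(demand, weights=[0.2, 0.3, 0.5]))   # most recent month weighted highest
```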


Qualitative Methods:

Qualitative data collection methods are especially useful when historical data is unavailable or when numbers or mathematical calculations are unnecessary.

Qualitative research is closely associated with words, sounds, feelings, emotions, colors, and non-quantifiable elements. These techniques are based on experience, judgment, intuition, conjecture, emotion, etc.

Quantitative methods do not provide the motive behind participants’ responses, often don’t reach underrepresented populations, and require long periods of time to collect the data. Hence, it is best to combine quantitative methods with qualitative methods.

1. Surveys: Surveys collect data from the target audience and gather insights into their preferences, opinions, choices, and feedback related to the organization’s products and services. Most survey software offers a wide range of question types.

You can also use a ready-made survey template to save time and effort. Online surveys can be customized to match the business’s brand by changing the theme, logo, etc. They can be distributed through several channels, such as email, website, offline app, QR code, social media, etc. 

You can select the channel based on your audience’s type and source. Once the data is collected, survey software can generate reports and run analytics algorithms to discover hidden insights. 

A survey dashboard can give you statistics related to response rate, completion rate, demographics-based filters, export and sharing options, etc. Integrating survey builders with third-party apps can maximize the effort spent on online real-time data collection . 
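For example, the response-rate and completion-rate figures such dashboards report reduce to simple ratios; a minimal sketch (hypothetical counts) might look like this:

```python
# Hypothetical survey distribution counts
invited = 2_000      # people who received the survey link
started = 640        # people who opened and began the survey
completed = 512      # people who answered every required question

response_rate = started / invited        # 0.32 -> 32% of invitees responded
completion_rate = completed / started    # 0.80 -> 80% of starters finished

print(f"Response rate:   {response_rate:.0%}")
print(f"Completion rate: {completion_rate:.0%}")
```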

Practical business intelligence relies on the synergy between analytics and reporting , where analytics uncovers valuable insights, and reporting communicates these findings to stakeholders.

2. Polls: Polls comprise a single question, which may be single-choice or multiple-choice. They are useful when you need to get a quick pulse of the audience’s sentiments. Because they are short, it is easier to get responses from people.

Like surveys, online polls can be embedded into various platforms. Once the respondents answer the question, they can also be shown how their responses compare to others.


3. Interviews: In face-to-face interviews, the interviewer asks a series of questions to the interviewee in person and notes down responses. If it is not feasible to meet the person, the interviewer can go for a telephone interview. 

This form of data collection is suitable for only a few respondents. It is too time-consuming and tedious to repeat the same process if there are many participants.


4. Delphi Technique: In the Delphi method, market experts are provided with the estimates and assumptions of other industry experts’ forecasts. Based on this information, experts may reconsider and revise their estimates and assumptions. The consensus of all experts on demand forecasts constitutes the final demand forecast.
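To illustrate the iterative character of the Delphi method, here is a deliberately simplified Python sketch (hypothetical expert estimates; real Delphi rounds involve structured qualitative feedback, not just averaging) in which each expert nudges their forecast toward the group median over successive rounds:

```python
import statistics

# Hypothetical initial demand forecasts (units) from five experts
estimates = [900.0, 1200.0, 1050.0, 1500.0, 980.0]

for round_number in range(1, 4):
    group_median = statistics.median(estimates)
    # Each expert revises partway toward the group median after seeing the round summary
    estimates = [e + 0.5 * (group_median - e) for e in estimates]
    print(f"Round {round_number}: median = {statistics.median(estimates):.0f}, "
          f"spread = {max(estimates) - min(estimates):.0f}")

# The shrinking spread mimics the movement toward a consensus forecast
```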

5. Focus Groups: Focus groups are another common qualitative data collection method. In a focus group, a small group of people, around 8-10 members, discusses the common areas of the research problem. Each individual provides his or her insights on the issue concerned. 

A moderator regulates the discussion among the group members. At the end of the discussion, the group reaches a consensus.

6. Questionnaire: A questionnaire is a printed set of open-ended or closed-ended questions that respondents answer based on their knowledge of and experience with the issue. A questionnaire can form part of a survey, but its end goal may or may not be a survey.

7. Digsite: Digsite is a purpose-built platform for conducting fast and flexible qualitative research, enabling users to understand the ‘whys’ behind consumer behavior. With Digsite, businesses can efficiently recruit targeted participants and gather rich qualitative insights through various methods, such as

  • Live video interviews,
  • Focus groups.

The platform supports agile, iterative learning by blending surveys, open-ended research, and intelligent dashboards for actionable results. Its natural language processing (NLP) and AI capabilities offer deeper emotional insights, enhancing user experience and product development. Supporting over 50 languages and ensuring compliance with regulations like GDPR and HIPAA, Digsite provides a secure and comprehensive research solution.

2. Secondary Data Collection Methods

Secondary data is data that has already been collected and used in the past. The researcher can obtain it from data sources both internal and external to the organization.

Internal sources of secondary data:

  • Organization’s health and safety records
  • Mission and vision statements
  • Financial Statements
  • Sales Report
  • CRM Software
  • Executive summaries

External sources of secondary data:

  • Government reports
  • Press releases
  • Business journals

Secondary data collection methods can also involve quantitative and qualitative techniques. Secondary data is easily available and is less time-consuming and less expensive to gather than primary data. However, the authenticity of the data gathered cannot always be verified using these methods.

Regardless of the data collection method of your choice, there must be direct communication with decision-makers so that they understand and commit to acting according to the results.

For this reason, pay special attention to the analysis and presentation of the information obtained. Remember that these data must be useful and actionable, and the data collection method you choose has much to do with that.


Steps in the Data Collection Process

The data collection process typically involves several key steps to ensure the accuracy and reliability of the data gathered. These steps provide a structured approach to gathering and analyzing data effectively. Here are the key steps in the data collection process:

  • Define the Objectives: Clearly outline the goals of the data collection. What questions are you trying to answer?
  • Identify Data Sources: Determine where the data will come from. This could include surveys and questionnaires, interviews (structured or unstructured), focus groups, observational research, document analysis, or existing databases.
  • Develop Data Collection Instruments: Create or adapt tools for collecting data, such as questionnaires or interview guides. Ensure they are valid and reliable.
  • Select a Sample: If you are not collecting data from the entire population, determine how to select your sample. Consider sampling methods like random, stratified, or convenience sampling (see the short sketch after this list).
  • Collect Data: Execute your data collection plan, following ethical guidelines and maintaining data integrity.
  • Store Data: Organize and store collected data securely, ensuring it’s easily accessible for analysis while maintaining confidentiality.
  • Analyze Data: After collecting the data, process and analyze it according to your objectives, using appropriate statistical or qualitative methods.
  • Interpret Results: Draw conclusions from your analysis, relating them back to your original objectives and research questions.
  • Report Findings: Present your findings in a clear, organized way, using visuals and summaries to communicate insights effectively.
  • Evaluate the Process: Reflect on the data collection process. Assess what worked well and what could be improved for future studies.
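As a minimal sketch of that sampling step (hypothetical population, illustrative only), the following Python example draws a simple random sample and a stratified sample in which each department is represented proportionally:

```python
import random

random.seed(42)

# Hypothetical employee population labelled by department (the stratum)
population = (
    [{"id": i, "dept": "Sales"} for i in range(60)]
    + [{"id": i, "dept": "Engineering"} for i in range(60, 90)]
    + [{"id": i, "dept": "HR"} for i in range(90, 100)]
)

# Simple random sample: every member has an equal chance of selection
simple_sample = random.sample(population, k=20)

# Stratified sample: sample each department in proportion to its size
def stratified_sample(pop, key, k):
    strata = {}
    for person in pop:
        strata.setdefault(person[key], []).append(person)
    sample = []
    for members in strata.values():
        n = round(k * len(members) / len(pop))  # proportional allocation
        sample.extend(random.sample(members, n))
    return sample

strat_sample = stratified_sample(population, key="dept", k=20)
print(len(simple_sample), len(strat_sample))  # 20 and 20
```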

Recommended Data Collection Tools

Choosing the right data collection tools depends on your specific needs, such as the type of data you’re collecting, the scale of your project, and your budget. Here are some widely used tools across different categories:

Survey Tools

Survey tools are software applications designed to collect quantitative data from a large audience through structured questionnaires. These tools are ideal for gathering customer feedback, employee opinions, or market research insights. They offer features like customizable templates, real-time analytics, and multiple distribution channels to help you reach your target audience effectively.

  • QuestionPro: Offers advanced survey features and analytics.
  • SurveyMonkey: User-friendly interface with customizable survey options.
  • Google Forms: Free and easy to use, suitable for simple surveys.

Interview and Focus Group Tools

Interview and focus group tools facilitate the collection of qualitative data through guided conversations and group discussions. These tools often include features for recording, transcribing, and analyzing spoken interactions, enabling researchers to gain in-depth insights into participants’ thoughts, attitudes, and behaviors.

  • Zoom: Great for virtual interviews and focus group discussions.
  • Microsoft Teams: Offers features for collaboration and recording sessions.

Observation and Field Data Collection

  • Open Data Kit (ODK): This is for mobile data collection in field settings.
  • REDCap: A secure web application for building and managing online surveys.

Mobile Data Collection

Mobile data collection tools leverage smartphones and tablets to gather data on the go. These tools enable users to collect data offline and sync it when an internet connection is available. They are ideal for remote areas or fieldwork where traditional data collection methods are impractical, offering features like GPS tagging, photo capture, and form-based inputs.

  • KoboToolbox: Designed for humanitarian work, useful for field data collection.
  • SurveyCTO: Provides offline data collection capabilities for mobile devices.

Data Analysis Tools

Data analysis tools are software applications that process and analyze quantitative data, helping researchers identify patterns, trends, and insights. These tools support various statistical methods and data visualization techniques, allowing users to interpret data effectively and make informed decisions based on their findings.

  • Tableau: Powerful data visualization tool to analyze survey results.
  • SPSS: Widely used for statistical analysis in research.

Qualitative Data Analysis

Qualitative data analysis tools help researchers organize, code, and interpret non-numerical data, such as text, images, and videos. These tools are essential for analyzing interview transcripts, open-ended survey responses, and social media content, providing features like thematic analysis, sentiment analysis, and visualization of qualitative patterns.

  • NVivo: For analyzing qualitative data like interviews or open-ended survey responses.
  • Dedoose: Useful for mixed-methods research, combining qualitative and quantitative data.
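As a highly simplified illustration of the kind of coding these tools automate (hypothetical responses and a hand-made keyword-to-theme codebook), a first-pass thematic tally might look like this in Python:

```python
from collections import Counter

# Hypothetical open-ended survey responses
responses = [
    "My manager gives clear feedback and support",
    "Workload is too high and deadlines are unrealistic",
    "Great team support, but communication from leadership is poor",
]

# Hand-crafted keyword-to-theme codebook (a real analysis would refine this iteratively)
codebook = {
    "feedback": "management", "manager": "management", "leadership": "management",
    "workload": "workload", "deadlines": "workload",
    "support": "support", "team": "support", "communication": "communication",
}

theme_counts = Counter()
for response in responses:
    for word in response.lower().split():
        word = word.strip(",.")
        if word in codebook:
            theme_counts[codebook[word]] += 1

print(theme_counts)  # e.g. Counter({'management': 3, 'support': 3, 'workload': 2, 'communication': 1})
```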

General Data Collection and Management

General data collection and management tools provide a comprehensive solution for collecting, storing, and organizing data from various sources. These tools often include features for data integration, cleansing, and security, ensuring that data is accessible and usable for analysis across different departments and projects. They are ideal for organizations looking to streamline their data management processes and enhance collaboration.

  • Airtable: Combines spreadsheet and database functionalities for organizing data.
  • Microsoft Excel: A versatile tool for data entry, analysis, and visualization.

If you are interested in purchasing, we invite you to visit our article, where we dive deeper and analyze the best data collection tools in the industry.

How Can QuestionPro Help to Create Effective Data Collection?

QuestionPro is a comprehensive online survey software platform that can greatly assist in various data collection methods. Here’s how it can help:

  • Survey Creation: QuestionPro offers a user-friendly interface for creating surveys with various question types, including multiple-choice, open-ended, Likert scale, and more. Researchers can customize surveys to fit their specific research needs and objectives.
  • Diverse Distribution Channels: The platform provides multiple channels for distributing surveys, including email, web links, social media, and website embedding surveys. This enables researchers to reach a wide audience and collect data efficiently.
  • Panel Management: QuestionPro offers panel management features, allowing researchers to create and manage panels of respondents for targeted data collection. This is particularly useful for longitudinal studies or when targeting specific demographics.
  • Data Analysis Tools: The platform includes robust data analysis tools that enable researchers to analyze survey responses in real time. Researchers can generate customizable reports, visualize data through charts and graphs, and identify trends and patterns within the data.
  • Data Security and Compliance: QuestionPro prioritizes data security and compliance with regulations such as GDPR and HIPAA. The platform offers features such as SSL encryption, data masking, and secure data storage to ensure the confidentiality and integrity of collected data.
  • Mobile Compatibility: With the increasing use of mobile devices, QuestionPro ensures that surveys are mobile-responsive, allowing respondents to participate in surveys conveniently from their smartphones or tablets.
  • Integration Capabilities: QuestionPro integrates with various third-party tools and platforms, including CRMs, email marketing software, and analytics tools. This allows researchers to streamline their data collection processes and incorporate survey data into their existing workflows.
  • Customization and Branding: Researchers can customize surveys with their branding elements, such as logos, colors, and themes, enhancing the professional appearance of surveys and increasing respondent engagement.

The conclusion you obtain from your investigation will set the course of the company’s decision-making, so present your report clearly and list the steps you followed to obtain those results.

Make sure that whoever will take the corresponding actions understands the importance of the information collected and that it gives them the solutions they expect.

QuestionPro offers a comprehensive suite of features and tools that can significantly streamline the data collection process, from survey creation to analysis, while ensuring data security and compliance. Remember that at QuestionPro, we can help you collect data easily and efficiently. Request a demo and learn about all the tools we have for you.

Frequently Asked Questions (FAQs)

Q: What are the most common data collection methods?

A: Common methods include surveys, interviews, observations, focus groups, and experiments.

Q: Why is data collection important?

A: Data collection helps organizations make informed decisions and understand trends, customer preferences, and market demands.

Q: How do quantitative and qualitative methods differ?

A: Quantitative methods focus on numerical data and statistical analysis, while qualitative methods explore non-numerical insights like attitudes and behaviors.

Q: Can quantitative and qualitative methods be combined?

A: Yes, combining methods can provide a more comprehensive understanding of the research topic.

Q: How does technology support data collection?

A: Technology streamlines data collection with tools like online surveys, mobile data gathering, and integrated analytics platforms.


Data Collection Methods: A Comprehensive View

  • Written by John Terra
  • Updated on February 21, 2024


Companies that want to be competitive in today’s digital economy enjoy the benefit of countless reams of data available for market research. In fact, thanks to the advent of big data, there’s a veritable tidal wave of information ready to be put to good use, helping businesses make intelligent decisions and thrive.

But before that data can be used, it must be processed. But before it can be processed, it must be collected, and that’s what we’re here for. This article explores the subject of data collection. We will learn about the types of data collection methods and why they are essential.

We will detail primary and secondary data collection methods and discuss data collection procedures. We’ll also share how you can learn practical skills through online data science training.

But first, let’s get the definition out of the way. What is data collection?

What is Data Collection?

Data collection is the act of collecting, measuring and analyzing different kinds of information using a set of validated standard procedures and techniques. The primary objective of data collection procedures is to gather reliable, information-rich data and analyze it to make critical business decisions. Once the desired data is collected, it undergoes a process of data cleaning and processing to make the information actionable and valuable for businesses.

Your choice of data collection method (alternatively called a data gathering procedure) depends on the research questions you’re working on, the type of data required, and the available time and resources. You can categorize data-gathering procedures into two main methods:

  • Primary data collection. Primary data is collected via first-hand experiences and does not draw on previously used data. The data obtained by primary data collection methods is exceptionally accurate and geared to the research’s purpose. These methods are divided into two categories: quantitative and qualitative. We’ll explore the specifics later.
  • Secondary data collection. Secondary data is the information that’s been used in the past. The researcher can obtain data from internal and external sources, including organizational data.

Let’s take a closer look at specific examples of both data collection methods.

Also Read: Why Use Python for Data Science?

The Specific Types of Data Collection Methods

As mentioned, primary data collection methods are split into quantitative and qualitative. We will examine each method’s data collection tools separately. Then, we will discuss secondary data collection methods.

Quantitative Methods

Quantitative techniques for demand forecasting and market research typically use statistical tools. When using these techniques, historical data is used to forecast demand. These primary data-gathering procedures are most often used to make long-term forecasts. Statistical analysis methods are highly reliable because they carry minimal subjectivity.

  • Barometric Method. Also called the leading indicators approach, data analysts and researchers employ this method to speculate on future trends based on current developments. When past events are used to predict future events, they are considered leading indicators.
  • Smoothing Techniques. Smoothing techniques can be used in cases where the time series lacks significant trends. These techniques eliminate random variation from historical demand and help identify demand levels and patterns to estimate future demand. The most popular methods used in these techniques are the simple moving average and the weighted moving average methods.
  • Time Series Analysis. The term “time series” refers to the sequential order of values in a variable, also known as a trend, at equal time intervals. Using patterns, organizations can predict customer demand for their products and services during the projected time.
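As a minimal sketch of fitting such a trend (hypothetical quarterly demand figures; NumPy is used here as an assumed convenience, not a tool the article itself names), a straight-line trend can be estimated and extrapolated like this:

```python
import numpy as np

# Hypothetical quarterly demand observed at equal time intervals
quarters = np.arange(8)                       # t = 0, 1, ..., 7
demand = np.array([200, 210, 220, 235, 240, 255, 260, 275])

# Fit a linear trend (degree-1 polynomial) to the historical series
slope, intercept = np.polyfit(quarters, demand, deg=1)

# Extrapolate the trend to forecast the next two quarters
for t in [8, 9]:
    print(f"Quarter {t}: forecast of about {slope * t + intercept:.0f} units")
```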

Qualitative Methods

Qualitative data collection methods are instrumental when no historical information is available, or numbers and mathematical calculations aren’t required. Qualitative research is closely linked to words, emotions, sounds, feelings, colors, and other non-quantifiable elements. These techniques rely on experience, conjecture, intuition, judgment, emotion, etc. Quantitative methods do not provide motives behind the participants’ responses. Additionally, they often don’t reach underrepresented populations and usually involve long data collection periods. Therefore, you get the best results using quantitative and qualitative methods together.

  • Questionnaires . Questionnaires are a printed set of either open-ended or closed-ended questions. Respondents must answer based on their experience and knowledge of the issue. A questionnaire is a part of a survey, while the questionnaire’s end goal doesn’t necessarily have to be a survey.
  • Surveys. Surveys collect data from target audiences, gathering insights into their opinions, preferences, choices, and feedback on the organization’s goods and services. Most survey software has a wide range of question types, or you can also use a ready-made survey template that saves time and effort. Surveys can be distributed via different channels such as e-mail, offline apps, websites, social media, QR codes, etc.

Once researchers collect the data, survey software generates reports and runs analytics algorithms to uncover hidden insights. Survey dashboards give you statistics relating to completion rates, response rates, filters based on demographics, export and sharing options, etc. Practical business intelligence depends on the synergy between analytics and reporting. Analytics uncovers valuable insights while reporting communicates these findings to the stakeholders.

  • Polls. Polls consist of one or more multiple-choice questions. Marketers can turn to polls when they want to take a quick snapshot of the audience’s sentiments. Since polls tend to be short, getting people to respond is more manageable. Like surveys, online polls can be embedded into various media and platforms. Once the respondents answer the question(s), they can be shown how they stand concerning other people’s responses.
  • Delphi Technique. The name is a callback to the Oracle of Delphi, a priestess at Apollo’s temple in ancient Greece, renowned for her prophecies. In this method, marketing experts are given the forecast estimates and assumptions made by other industry experts. The first batch of experts may then use the information provided by the other experts to revise and reconsider their estimates and assumptions. The total expert consensus on the demand forecasts creates the final demand forecast.
  • Interviews. In this method, interviewers talk to the respondents either face-to-face or by telephone. In the first case, the interviewer asks the interviewee a series of questions in person and notes the responses. The interviewer can opt for a telephone interview if the parties cannot meet in person. This data collection form is practical for use with only a few respondents; repeating the same process with a considerably larger group takes longer.
  • Focus Groups. Focus groups are one of the primary qualitative data collection methods. In focus groups, small groups of people, usually around 8-10 members, discuss the research problem’s common aspects. Each person provides their insights on the issue, and a moderator regulates the discussion. When the discussion ends, the group reaches a consensus.

Also Read: A Beginner’s Guide to the Data Science Process

Secondary Data Collection Methods

Secondary data is information that has been used in past situations. Secondary data collection methods can include quantitative and qualitative techniques. In addition, secondary data is easily available, so it’s less time-consuming and less expensive to obtain than primary data. However, the authenticity of data gathered with secondary data collection tools cannot always be verified.

Internal secondary data sources:

  • CRM Software
  • Executive summaries
  • Financial Statements
  • Mission and vision statements
  • Organization’s health and safety records
  • Sales Reports

External secondary data sources:

  • Business journals
  • Government reports
  • Press releases

The Importance of Data Collection Methods

Data collection methods play a critical part in the research process, as they determine the quality and accuracy of the collected data. Here are some reasons why data collection procedures are so important:

  • They determine the quality and accuracy of collected data
  • They ensure the data and the research findings are valid, relevant and reliable
  • They help reduce bias and increase the sample’s representation
  • They are crucial for making informed decisions and arriving at accurate conclusions
  • They provide accurate data, which facilitates the achievement of research objectives

Also Read: What Is Data Processing? Definition, Examples, Trends

So, What’s the Difference Between Data Collecting and Data Processing?

Data collection is the first step in the data processing process. Data collection involves gathering information (raw data) from various sources such as interviews, surveys, questionnaires, etc. Data processing describes the steps taken to organize, manipulate and transform the collected data into a useful and meaningful resource. This process may include tasks such as cleaning and validating data, analyzing and summarizing data, and creating visualizations or reports.

So, data collection is just one step in the overall data processing chain of events.
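To make the distinction concrete, here is a small Python sketch (hypothetical survey records, standard library only) of the kind of cleaning and validation step that turns collected raw data into something analyzable:

```python
# Hypothetical raw survey records as collected (note the missing and invalid values)
raw_records = [
    {"respondent": "r1", "age": "34", "satisfaction": "4"},
    {"respondent": "r2", "age": "",   "satisfaction": "5"},
    {"respondent": "r3", "age": "29", "satisfaction": "11"},  # rating outside the 1-5 scale
    {"respondent": "r1", "age": "34", "satisfaction": "4"},   # duplicate submission
]

def clean(records):
    seen, cleaned = set(), []
    for rec in records:
        if rec["respondent"] in seen:
            continue                      # drop duplicate submissions
        seen.add(rec["respondent"])
        if not rec["age"]:
            continue                      # drop records with missing age
        satisfaction = int(rec["satisfaction"])
        if not 1 <= satisfaction <= 5:
            continue                      # drop out-of-range ratings
        cleaned.append({"respondent": rec["respondent"],
                        "age": int(rec["age"]),
                        "satisfaction": satisfaction})
    return cleaned

print(clean(raw_records))  # only the first valid record (r1) survives the checks
```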

Do You Want to Become a Data Scientist?

If this discussion about data collection and the professionals who conduct it has sparked your enthusiasm for a new career, why not check out this online data science program ?

The Glassdoor.com jobs website shows that data scientists in the United States typically make an average yearly salary of $129,127 plus additional bonuses and cash incentives. So, if you’re interested in a new career or are already in the field but want to upskill or refresh your current skill set, sign up for this bootcamp and prepare to tackle the challenges of today’s big data.



SurveyCTO

A Guide to Data Collection: Methods, Process, and Tools


Whether your field is development economics, international development, the nonprofit sector, or myriad other industries, effective data collection is essential. It informs decision-making and increases your organization’s impact. However, the process of data collection can be complex and challenging. If you’re in the beginning stages of creating a data collection process, this guide is for you. It outlines tested methods, efficient procedures, and effective tools to help you improve your data collection activities and outcomes.

At SurveyCTO, we’ve used our years of experience and expertise to build a robust, secure, and scalable mobile data collection platform. It’s trusted by respected institutions like The World Bank, J-PAL, Oxfam, and the Gates Foundation, and it’s changed the way many organizations collect and use data. With this guide, we want to share what we know and help you get ready to take the first step in your data collection journey.

Main takeaways from this guide

  • Before starting the data collection process, define your goals and identify data sources, which can be primary (first-hand research) or secondary (existing resources).
  • Your data collection method should align with your goals, resources, and the nature of the data needed. Surveys, interviews, observations, focus groups, and forms are common data collection methods. 
  • Sampling involves selecting a representative group from a larger population. Choosing the right sampling method to gather representative and relevant data is crucial.
  • Crafting effective data collection instruments like surveys and questionnaires is key. Instruments should undergo rigorous testing for reliability and accuracy.
  • Data collection is an ongoing, iterative process that demands real-time monitoring and adjustments to ensure high-quality, reliable results.
  • After data collection, data should be cleaned to eliminate errors and organized for efficient analysis. The data collection journey further extends into data analysis, where patterns and useful information that can inform decision-making are discovered.
  • Common challenges in data collection include data quality and consistency issues, data security concerns, and limitations with offline surveys. Employing robust data validation processes, implementing strong security protocols, and using offline-enabled data collection tools can help overcome these challenges.
  • Data collection, entry, and management tools and data analysis, visualization, reporting, and workflow tools can streamline the data collection process, improve data quality, and facilitate data analysis.

What is data collection?


The traditional definition of data collection might lead us to think of gathering information through surveys, observations, or interviews. However, the modern-age definition of data collection extends beyond conducting surveys and observations. It encompasses the systematic gathering and recording of any kind of information through digital or manual methods. Data collection can be as routine as a doctor logging a patient’s information into an electronic medical record system during each clinic visit, or as specific as keeping a record of mosquito nets delivered to a rural household.

Getting started with data collection


Before starting your data collection process, you must clearly understand what you aim to achieve and how you’ll get there. Below are some actionable steps to help you get started.

1. Define your goals

Defining your goals is a crucial first step. Engage relevant stakeholders and team members in an iterative and collaborative process to establish clear goals. It’s important that projects start with the identification of key questions and desired outcomes to ensure you focus your efforts on gathering the right information. 

Start by understanding the purpose of your project: what problem are you trying to solve, or what change do you want to bring about? Think about your project’s potential outcomes and obstacles and try to anticipate what kind of data would be useful in these scenarios. Consider who will be using the data you collect and what data would be the most valuable to them. Think about the long-term effects of your project and how you will measure these over time. Lastly, leverage any historical data from previous projects to help you refine key questions that may have been overlooked previously.

Once questions and outcomes are established, your data collection goals may still vary based on the context of your work. To demonstrate, let’s use the example of an international organization working on a healthcare project in a remote area.

  • If you’re a researcher, your goal will revolve around collecting primary data to answer specific questions. This could involve designing a survey or conducting interviews to collect first-hand data on patient improvement, disease or illness prevalence, and behavior changes (such as an increase in patients seeking healthcare).
  • If you’re part of the monitoring and evaluation (M&E) team, your goal will revolve around measuring the success of your healthcare project. This could involve collecting primary data through surveys or observations and developing a dashboard to display real-time metrics like the number of patients treated, the percentage reduction in incidence of disease, and average patient wait times. Your focus would be using this data to implement any needed program changes and ensure your project meets its objectives.
  • If you’re part of a field team, your goal will center around the efficient and accurate execution of project plans. You might be responsible for using data collection tools to capture pertinent information in different settings, such as in interviews taken directly from the sample community or over the phone. The data you collect and manage will directly influence the operational efficiency of the project and assist in achieving the project’s overarching objectives.

2. Identify your data sources

The crucial next step in your research process is determining your data source. Essentially, there are two main data types to choose from: primary and secondary.

  • Primary data is the information you collect directly from first-hand engagements. It’s gathered specifically for your research and tailored to your research question. Primary data collection methods can range from surveys and interviews to focus groups and observations. Because you design the data collection process, primary data can offer precise, context-specific information directly related to your research objectives. For example, suppose you are investigating the impact of a new education policy. In that case, primary data might be collected through surveys distributed to teachers or interviews with school administrators dealing directly with the policy’s implementation.
  • Secondary data, on the other hand, is derived from resources that already exist. This can include information gathered for other research projects, administrative records, historical documents, statistical databases, and more. While not originally collected for your specific study, secondary data can offer valuable insights and background information that complement your primary data. For instance, continuing with the education policy example, secondary data might involve academic articles about similar policies, government reports on education or previous survey data about teachers’ opinions on educational reforms.

While both types of data have their strengths, this guide will predominantly focus on primary data and the methods to collect it. Primary data is often emphasized in research because it provides fresh, first-hand insights that directly address your research questions. Primary data also allows for more control over the data collection process, ensuring data is relevant, accurate, and up-to-date.

However, secondary data can offer critical context, allow for longitudinal analysis, save time and resources, and provide a comparative framework for interpreting your primary data. It can be a crucial backdrop against which your primary data can be understood and analyzed. While we focus on primary data collection methods in this guide, we encourage you not to overlook the value of incorporating secondary data into your research design where appropriate.

3. Choose your data collection method

When choosing your data collection method, there are many options at your disposal. Data collection is not limited to methods like surveys and interviews. In fact, many of the processes in our daily lives serve the goal of collecting data, from intake forms to automated endpoints, such as payment terminals and mass transit card readers. Let us dive into some common types of data collection methods: 

Surveys and Questionnaires

Surveys and questionnaires are tools for gathering information about a group of individuals, typically by asking them predefined questions. They can be used to collect quantitative and qualitative data and be administered in various ways, including online, over the phone, in person (offline), or by mail.

  • Advantages : They allow researchers to reach many participants quickly and cost-effectively, making them ideal for large-scale studies. The structured format of questions makes analysis easier.
  • Disadvantages : They may not capture complex or nuanced information as participants are limited to predefined response choices. Also, there can be issues with response bias, where participants might provide socially desirable answers rather than honest ones.

Interviews

Interviews involve a one-on-one conversation between the researcher and the participant. The interviewer asks open-ended questions to gain detailed information about the participant’s thoughts, feelings, experiences, and behaviors.

  • Advantages : They allow for an in-depth understanding of the topic at hand. The researcher can adapt the questioning in real time based on the participant’s responses, allowing for more flexibility.
  • Disadvantages : They can be time-consuming and resource-intensive, as they require trained interviewers and a significant amount of time for both conducting and analyzing responses. They may also introduce interviewer bias if not conducted carefully, due to how an interviewer presents questions and perceives the respondent, and how the respondent perceives the interviewer. 

Observations

Observations involve directly observing and recording behavior or other phenomena as they occur in their natural settings.

  • Advantages : Observations can provide valuable contextual information, as researchers can study behavior in the environment where it naturally occurs, reducing the risk of artificiality associated with laboratory settings or self-reported measures.
  • Disadvantages : Observational studies may suffer from observer bias, where the observer’s expectations or biases could influence their interpretation of the data. Also, some behaviors might be altered if subjects are aware they are being observed.

Focus Groups

Focus groups are guided discussions among selected individuals to gain information about their views and experiences.

  • Advantages : Focus groups allow for interaction among participants, which can generate a diverse range of opinions and ideas. They are good for exploring new topics where there is little pre-existing knowledge.
  • Disadvantages : Dominant voices in the group can sway the discussion, potentially silencing less assertive participants. They also require skilled facilitators to moderate the discussion effectively.

Forms

Forms are standardized documents with blank fields for collecting data in a systematic manner. They are often used in fields like Customer Relationship Management (CRM) or Electronic Medical Records (EMR) data entry. Surveys may also be referred to as forms.

  • Advantages : Forms are versatile, easy to use, and efficient for data collection. They can streamline workflows by standardizing the data entry process.
  • Disadvantages : They may not provide in-depth insights as the responses are typically structured and limited. There is also potential for errors in data entry, especially when done manually.

Selecting the right data collection method should be an intentional process, taking into consideration the unique requirements of your project. The method selected should align with your goals, available resources, and the nature of the data you need to collect.

If you aim to collect quantitative data, surveys, questionnaires, and forms can be excellent tools, particularly for large-scale studies. These methods are suited to providing structured responses that can be analyzed statistically, delivering solid numerical data.

However, if you’re looking to uncover a deeper understanding of a subject, qualitative data might be more suitable. In such cases, interviews, observations, and focus groups can provide richer, more nuanced insights. These methods allow you to explore experiences, opinions, and behaviors deeply. Some surveys can also include open-ended questions that provide qualitative data.

The cost of data collection is also an important consideration. If you have budget constraints, in-depth, in-person conversations with every member of your target population may not be practical. In such cases, distributing questionnaires or forms can be a cost-saving approach.

Additional considerations include language barriers and connectivity issues. If your respondents speak different languages, consider translation services or multilingual data collection tools. If your target population resides in areas with limited connectivity and you plan to collect data using mobile devices, ensure your tool provides offline data collection, which will allow you to carry out your data collection plan without internet connectivity.

4. Determine your sampling method

Now that you’ve established your data collection goals and how you’ll collect your data, the next step is deciding whom to collect your data from. Sampling involves carefully selecting a representative group from a larger population. Choosing the right sampling method is crucial for gathering representative and relevant data that aligns with your data collection goal.

Consider the following guidelines to choose the appropriate sampling method for your research goal and data collection method:

  • Understand Your Target Population: Start by conducting thorough research of your target population. Understand who they are, their characteristics, and subgroups within the population.
  • Anticipate and Minimize Biases: Anticipate and address potential biases within the target population to help minimize their impact on the data. For example, will your sampling method accurately reflect all ages, gender, cultures, etc., of your target population? Are there barriers to participation for any subgroups? Your sampling method should allow you to capture the most accurate representation of your target population.
  • Maintain Cost-Effective Practices: Consider the cost implications of your chosen sampling methods. Some sampling methods will require more resources, time, and effort. Your chosen sampling method should balance the cost factors with the ability to collect your data effectively and accurately. 
  • Consider Your Project’s Objectives: Tailor the sampling method to meet your specific objectives and constraints, such as M&E teams requiring real-time impact data and researchers needing representative samples for statistical analysis.

By adhering to these guidelines, you can make informed choices when selecting a sampling method, maximizing the quality and relevance of your data collection efforts.

5. Identify and train collectors

Not every data collection use case requires data collectors, but training individuals responsible for data collection becomes crucial in scenarios involving field presence.

The SurveyCTO platform supports both self-response survey modes and surveys that require a human field worker to do in-person interviews. Whether you’re hiring and training data collectors, utilizing an existing team, or training existing field staff, we offer comprehensive guidance and the right tools to ensure effective data collection practices.  

Here are some common training approaches for data collectors:

  • In-Class Training: Comprehensive sessions covering protocols, survey instruments, and best practices empower data collectors with skills and knowledge.
  • Tests and Assessments: Assessments evaluate collectors’ understanding and competence, highlighting areas where additional support is needed.
  • Mock Interviews: Simulated interviews refine collectors’ techniques and communication skills.
  • Pre-Recorded Training Sessions: Accessible reinforcement and self-paced learning to refresh and stay updated.

Training data collectors is vital for successful data collection techniques. Your training should focus on proper instrument usage and effective interaction with respondents, including communication skills, cultural literacy, and ethical considerations.

Remember, training is an ongoing process. Knowledge gaps and issues may arise in the field, necessitating further training.

Moving Ahead: Iterative Steps in Data Collection

A woman in a blazer sits at a desk reviewing paperwork in front of her laptop.

Once you’ve established the preliminary elements of your data collection process, you’re ready to start your data collection journey. In this section, we’ll delve into the specifics of designing and testing your instruments, collecting data, and organizing data while embracing the iterative nature of the data collection process, which requires diligent monitoring and making adjustments when needed.

6. Design and test your instruments

Designing effective data collection instruments like surveys and questionnaires is key. It’s crucial to prioritize respondent consent and privacy to ensure the integrity of your research. Thoughtful design and careful testing of survey questions are essential for optimizing research insights. Other critical considerations are: 

  • Clear and Unbiased Question Wording: Craft unambiguous, neutral questions free from bias to gather accurate and meaningful data. For example, instead of asking, “Shouldn’t we invest more into renewable energy that will combat the effects of climate change?” phrase the question in a neutral way that allows the respondent to voice their own thoughts: “What are your thoughts on investing more in renewable energy?”
  • Logical Ordering and Appropriate Response Format: Arrange questions logically and choose response formats (such as multiple-choice, Likert scale, or open-ended) that suit the nature of the data you aim to collect.
  • Coverage of Relevant Topics: Ensure that your instrument covers all topics pertinent to your data collection goals while respecting cultural and social sensitivities. Make sure your instrument avoids assumptions, stereotypes, and languages or topics that could be considered offensive or taboo in certain contexts. The goal is to avoid marginalizing or offending respondents based on their social or cultural background.
  • Collect Only Necessary Data: Design survey instruments that focus solely on gathering the data required for your research objectives, avoiding unnecessary information.
  • Language(s) of the Respondent Population: Tailor your instruments to accommodate the languages your target respondents speak, offering translated versions if needed. Similarly, take into account accessibility for respondents who can’t read by offering alternative formats like images in place of text.
  • Desired Length of Time for Completion: Respect respondents’ time by designing instruments that can be completed within a reasonable timeframe, balancing thoroughness with engagement. Having a general sense of how long a response should take will also help you weed out bad responses. For example, a response that was rushed and completed well outside of your expected timeframe could indicate a submission that needs to be excluded.
  • Collecting and Documenting Respondents’ Consent and Privacy: Ensure a robust consent process, transparent data usage communication, and privacy protection throughout data collection.

Perform Cognitive Interviewing

Cognitive interviewing is a method used to refine survey instruments and improve the accuracy of survey responses by evaluating how respondents understand, process, and respond to the instrument’s questions. In practice, cognitive interviewing involves an interview with the respondent, asking them to verbalize their thoughts as they interact with the instrument. By actively probing and observing their responses, you can identify and address ambiguities, ensuring accurate data collection.  

Thoughtful question wording, well-organized response options, and logical sequencing enhance comprehension, minimize biases, and ensure accurate data collection. Iterative testing and refinement based on respondent feedback improve the validity, reliability, and actionability of insights obtained.

Put Your Instrument to the Test

Through rigorous testing, you can uncover flaws, ensure reliability, maximize accuracy, and validate your instrument’s performance. This can be achieved by:

  • Conducting pilot testing to enhance the reliability and effectiveness of data collection. Administer the instrument, identify difficulties, gather feedback, and assess performance in real-world conditions.
  • Making revisions based on pilot testing to enhance clarity, accuracy, usability, and participant satisfaction. Refine questions, instructions, and format for effective data collection.
  • Continuously iterating and refining your instrument based on feedback and real-world testing. This ensures reliable, accurate, and audience-aligned methods of data collection. Additionally, this ensures your instrument adapts to changes, incorporates insights, and maintains ongoing effectiveness.

7. Collect your data

Now that you have your well-designed survey, interview questions, observation plan, or form, it’s time to implement it and gather the needed data. Data collection is not a one-and-done deal; it’s an ongoing process that demands attention to detail. Imagine spending weeks collecting data, only to discover later that a significant portion is unusable due to incomplete responses, improper collection methods, or falsified responses. To avoid such setbacks, adopt an iterative approach.

Leverage data collection tools with real-time monitoring to proactively identify outliers and issues. Take immediate action by fine-tuning your instruments, optimizing the data collection process, addressing concerns like additional training, or reevaluating personnel responsible for inaccurate data (for example, a field worker who sits in a coffee shop entering fake responses rather than doing the work of knocking on doors).

SurveyCTO’s Data Explorer was specifically designed to fulfill this requirement, empowering you to monitor incoming data, gain valuable insights, and know where changes may be needed. Embracing this iterative approach ensures ongoing improvement in data collection, resulting in more reliable and precise results.
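
To make the idea of real-time monitoring concrete, here is a minimal sketch (not SurveyCTO’s actual implementation) of the kind of automated check you might run on a batch of incoming submissions. The column names, thresholds, and pandas-based approach are all assumptions for illustration.

```python
import pandas as pd

# Hypothetical batch of incoming survey submissions (column names are illustrative).
submissions = pd.DataFrame({
    "submission_id": ["S001", "S002", "S003", "S004"],
    "duration_seconds": [1250, 95, 1100, 1400],   # total time spent on the interview
    "household_size": [4, 3, 52, 5],              # reported number of household members
})

# Flag interviews completed implausibly fast (possible fabricated responses).
MIN_DURATION = 600  # assumed minimum plausible interview length, in seconds
too_fast = submissions["duration_seconds"] < MIN_DURATION

# Flag values far outside the plausible range for a key variable.
implausible_size = ~submissions["household_size"].between(1, 20)

# Review flagged submissions before they enter the analysis dataset.
flagged = submissions[too_fast | implausible_size]
print(flagged[["submission_id", "duration_seconds", "household_size"]])
```

Running checks like this on every sync, rather than once at the end of fieldwork, is what makes it possible to retrain staff or adjust instruments while data collection is still under way.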

8. Clean and organize your data

After data collection, the next step is to clean and organize the data to ensure its integrity and usability.

  • Data Cleaning: This stage involves sifting through your data to identify and rectify any errors, inconsistencies, or missing values. It’s essential to maintain the accuracy of your data and ensure that it’s reliable for further analysis. Data cleaning can uncover duplicates, outliers, and gaps that could skew your results if left unchecked. With real-time data monitoring, this continuous cleaning process keeps your data precise and current throughout the data collection period. Similarly, review and correction workflows allow you to monitor the quality of your incoming data.
  • Organizing Your Data: Post-cleaning, it’s time to organize your data for efficient analysis and interpretation. Labeling your data using appropriate codes or categorizations can simplify navigation and streamline the extraction of insights. When you use a survey or form, labeling your data is often not necessary because you can design the instrument to collect in the right categories or return the right codes. An organized dataset is easier to manage, analyze, and interpret, ensuring that your collection efforts are not wasted but lead to valuable, actionable insights.

Remember, each stage of the data collection process, from design to cleaning, is iterative and interconnected. By diligently cleaning and organizing your data, you are setting the stage for robust, meaningful analysis that can inform your data-driven decisions and actions.
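
As a rough illustration of the cleaning and organizing steps described above, here is a small, hypothetical pandas sketch. The column names and values are invented, and a real project would add many more checks.

```python
import pandas as pd

# Illustrative raw responses; respondent 102 was submitted twice and some fields are missing.
raw = pd.DataFrame({
    "respondent_id": [101, 102, 102, 103],
    "region":        ["North", "south", "south", None],
    "age":           [34, 29, 29, None],
})

# 1. Remove exact duplicate submissions.
clean = raw.drop_duplicates().copy()

# 2. Report missing values so they can be followed up or handled explicitly.
print(clean.isna().sum())

# 3. Standardize categorical labels so analysis groups them consistently.
clean["region"] = clean["region"].str.strip().str.title()

# 4. Organize: store the cleaned variable as an explicit category for later analysis.
clean["region"] = clean["region"].astype("category")
print(clean)
```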

What happens after data collection?

A person sits at a laptop while using a large tablet to aggregate data into a graph.

The data collection journey takes us next into data analysis, where you’ll uncover patterns, empowering informed decision-making for researchers, evaluation teams, and field personnel.

Process and Analyze Your Data

Explore data through statistical and qualitative techniques to discover patterns, correlations, and insights during this pivotal stage. It’s about extracting the essence of your data and translating numbers into knowledge. Whether applying descriptive statistics, conducting regression analysis, or using thematic coding for qualitative data, this process drives decision-making and charts the path toward actionable outcomes.
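
For example, a first pass at this stage often looks something like the sketch below, which computes descriptive statistics and a simple group comparison with pandas. The dataset and variable names are invented for illustration.

```python
import pandas as pd

# Invented post-collection dataset: patient wait times at two clinics.
df = pd.DataFrame({
    "clinic":       ["A", "A", "B", "B", "B", "A"],
    "wait_minutes": [35, 42, 18, 25, 22, 38],
    "satisfied":    [True, False, True, True, True, False],
})

# Descriptive statistics for the whole sample.
print(df["wait_minutes"].describe())

# Compare groups: mean wait time and satisfaction rate per clinic.
summary = df.groupby("clinic").agg(
    mean_wait=("wait_minutes", "mean"),
    satisfaction_rate=("satisfied", "mean"),
)
print(summary)
```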

Interpret and Report Your Results

Interpreting and reporting your data brings meaning and context to the numbers. Translating raw data into digestible insights for informed decision-making and effective stakeholder communication is critical.

The approach to interpretation and reporting varies depending on the perspective and role:

  • Researchers often lean heavily on statistical methods to identify trends, extract meaningful conclusions, and share their findings in academic circles, contributing to their knowledge pool.
  • M&E teams typically produce comprehensive reports, shedding light on the effectiveness and impact of programs. These reports guide internal and sometimes external stakeholders, supporting informed decisions and driving program improvements.

Field teams provide a first-hand perspective. Since they are often the first to see the results of the practical implementation of data, field teams are instrumental in providing immediate feedback loops on project initiatives. Field teams do the work that provides context to help research and M&E teams understand external factors like the local environment, cultural nuances, and logistical challenges that impact data results.

Safely store and handle data

Throughout the data collection process, and after it has been collected, it is vital to follow best practices for storing and handling data to ensure the integrity of your research. While the specifics of how to best store and handle data will depend on your project, here are some important guidelines to keep in mind:

  • Use cloud storage to hold your data if possible, since it is safer and more accessible than storing data on hard drives.
  • Periodically back up and purge old data from your system, since it’s safer not to retain data longer than necessary.
  • If you use mobile devices to collect and store data, use private, internal, app-specific storage where possible.
  • Restrict access to stored data to only those who need to work with that data.

Further considerations for data safety are discussed below in the section on data security.

Remember to uphold ethical standards in interpreting and reporting your data, regardless of your role. Clear communication, respectful handling of sensitive information, and adhering to confidentiality and privacy rights are all essential to fostering trust, promoting transparency, and bolstering your work’s credibility.

Common Data Collection Challenges


Data collection is vital to data-driven initiatives, but it comes with challenges. Addressing common challenges such as poor data quality, privacy concerns, inadequate sample sizes, and bias is essential to ensure the collected data is reliable, trustworthy, and secure. 

In this section, we’ll explore three major challenges: data quality and consistency issues, data security concerns, and limitations with offline data collection, along with strategies to overcome them.

Data Quality and Consistency

Data quality and consistency refer to data accuracy and reliability throughout the collection and analysis process. 

Challenges such as incomplete or missing data, data entry errors, measurement errors, and data coding/categorization errors can impact the integrity and usefulness of the data. 

To navigate these complexities and maintain high standards, consistency, and integrity in the dataset:

  • Implement robust data validation processes, 
  • Ensure proper training for data entry personnel, 
  • Employ automated data validation techniques, and 
  • Conduct regular data quality audits.
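
Building on the automated-validation point above, here is a minimal, hypothetical sketch of rule-based record validation in Python. The field names and limits are assumptions, and production systems would typically rely on the validation features built into their data collection tool or a dedicated validation framework.

```python
# Assumed validation rules; each maps a field to a check it must pass.
RULES = {
    "age":           lambda v: v is not None and 0 <= v <= 120,
    "district":      lambda v: v in {"North", "South", "East", "West"},
    "consent_given": lambda v: v is True,
}

def validate(record: dict) -> list[str]:
    """Return the list of fields in a record that fail their validation rule."""
    return [field for field, rule in RULES.items() if not rule(record.get(field))]

records = [
    {"age": 34, "district": "North", "consent_given": True},
    {"age": 250, "district": "Central", "consent_given": True},  # fails two checks
]

for i, rec in enumerate(records):
    failures = validate(rec)
    if failures:
        print(f"Record {i}: failed checks on {failures}")
```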

Data security

Data security encompasses safeguarding data through ensuring data privacy and confidentiality, securing storage and backup, and controlling data sharing and access.

Challenges include the risk of potential breaches, unauthorized access, and the need to comply with data protection regulations.

To address these setbacks and maintain privacy, trust, and confidence during the data collection process: 

  • Use encryption and authentication methods, 
  • Implement robust security protocols, 
  • Update security measures regularly, 
  • Provide employee training on data security, and 
  • Adopt secure cloud storage solutions.
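
As one concrete illustration of encryption at rest, the sketch below uses the Python cryptography library’s Fernet interface to encrypt a sensitive value before storage. It demonstrates the principle only; a real deployment would also need key management, access controls, and encryption in transit.

```python
from cryptography.fernet import Fernet

# Generate a key once and store it separately from the data (e.g., in a secrets manager).
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a sensitive field before writing it to disk or a database.
plaintext = b"respondent phone: +1-555-0100"
ciphertext = fernet.encrypt(plaintext)

# Only holders of the key can recover the original value.
recovered = fernet.decrypt(ciphertext)
assert recovered == plaintext
```

The important design point is that the ciphertext, not the raw value, is what gets stored and backed up, so a leaked database dump alone does not expose respondent information.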

Offline Data Collection

Offline data collection refers to the process of gathering data using modes like mobile device-based computer-assisted personal interviewing (CAPI) when there is an inconsistent or unreliable internet connection, provided the data collection tool being used for CAPI has the functionality to work offline.

Challenges associated with offline data collection include synchronization issues, difficulty transferring data, and compatibility problems between devices and data collection tools.

To overcome these challenges and enable efficient and reliable offline data collection processes, employ the following strategies: 

  • Leverage offline-enabled data collection apps or tools that enable you to survey respondents even when there’s no internet connection and upload the data to a central repository at a later time.
  • Build periodic data synchronization into your data collection plan for times when connectivity is available.
  • Use offline, device-based storage for seamless data transfer and compatibility.
  • Provide clear instructions to field personnel on handling offline data collection scenarios.
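
A toy sketch of the first two strategies above might look like the following: submissions are appended to a local file while offline and uploaded when connectivity returns. The endpoint URL, payload format, and use of the requests library are all assumptions for illustration; dedicated offline-capable tools handle this far more robustly.

```python
import json
from pathlib import Path

import requests  # assumes the requests library is installed

QUEUE_FILE = Path("pending_submissions.jsonl")      # local, device-based storage
UPLOAD_URL = "https://example.org/api/submissions"  # hypothetical central repository

def save_offline(submission: dict) -> None:
    """Append a submission to the local queue while offline."""
    with QUEUE_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(submission) + "\n")

def sync_when_online() -> None:
    """Try to upload every queued submission; keep any that fail for the next attempt."""
    if not QUEUE_FILE.exists():
        return
    remaining = []
    for line in QUEUE_FILE.read_text(encoding="utf-8").splitlines():
        try:
            requests.post(UPLOAD_URL, json=json.loads(line), timeout=10).raise_for_status()
        except requests.RequestException:
            remaining.append(line)  # no connectivity or server error: retry later
    QUEUE_FILE.write_text("\n".join(remaining) + ("\n" if remaining else ""), encoding="utf-8")
```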

Utilizing Technology in Data Collection

A group of people stand in a circle holding brightly colored smartphones.

Embracing technology throughout your data collection process can help you overcome many challenges described in the previous section. Data collection tools can streamline your data collection, improve the quality and security of your data, and facilitate the analysis of your data. Let’s look at two broad categories of tools that are essential for data collection:

Data Collection, Entry, & Management Tools

These tools help with data collection, input, and organization. They can range from digital survey platforms to comprehensive database systems, allowing you to gather, enter, and manage your data effectively. They can significantly simplify the data collection process, minimize human error, and offer practical ways to organize and manage large volumes of data. Some of these tools are:

  • Microsoft Office
  • Google Docs
  • SurveyMonkey
  • Google Forms

Data Analysis, Visualization, Reporting, & Workflow Tools

These tools assist in processing and interpreting the collected data. They provide a way to visualize data in a user-friendly format, making it easier to identify trends and patterns. These tools can also generate comprehensive reports to share your findings with stakeholders and help manage your workflow efficiently. By automating complex tasks, they can help ensure accuracy and save time. Tools for these purposes include:

  • Google Sheets

Data collection tools like SurveyCTO often have integrations to help users seamlessly transition from data collection to data analysis, visualization, reporting, and managing workflows.

Master Your Data Collection Process With SurveyCTO

As we bring this guide to a close, you now possess a wealth of knowledge to develop your data collection process. From understanding the significance of setting clear goals to the crucial process of selecting your data collection methods and addressing common challenges, you are equipped to handle the intricate details of this dynamic process.

Remember, you’re not venturing into this complex process alone. At SurveyCTO, we offer not just a tool but an entire support system committed to your success. Beyond troubleshooting support, our success team serves as research advisors and expert partners, ready to provide guidance at every stage of your data collection journey.

With SurveyCTO , you can design flexible surveys in Microsoft Excel or Google Sheets, collect data online and offline with above-industry-standard security, monitor your data in real time, and effortlessly export it for further analysis in any tool of your choice. You also get access to our Data Explorer, which allows you to visualize incoming data at both individual survey and aggregate levels instantly.

In the iterative data collection process, our users tell us that SurveyCTO stands out with its capacity to establish review and correction workflows. It enables you to monitor incoming data and configure automated quality checks to flag error-prone submissions.

Finally, data security is of paramount importance to us. We ensure best-in-class security measures like SOC 2 compliance, end-to-end encryption, single sign-on (SSO), GDPR-compliant setups, customizable user roles, and self-hosting options to keep your data safe.

As you embark on your data collection journey, you can count on SurveyCTO’s experience and expertise to be by your side every step of the way. Our team would be excited and honored to be a part of your research project, offering you the tools and processes to gain informative insights and make effective decisions. Partner with us today and revolutionize the way you collect data.

Better data, better decision making, better world.



Data collection (data gathering): methods, benefits and best practices

We produce data on a daily basis: statistics say this year we will create 120 zettabytes of data, and by 2025 the number will increase to 181 zettabytes. How do we make sure the data we gather is relevant, important and used in the right way?

Data collection: definition and introduction

Before we dive into details, let’s look at some definitions.

Data collection refers to the process of gathering and acquiring information, facts, or observations from various sources, in a systematic and organised manner. The collected data can be used for various purposes, such as research, analysis, decision-making, and problem-solving.

In today’s digital age, data collection has become increasingly prevalent and crucial, as it enables organisations and individuals to gain insights and make informed choices based on empirical evidence.


The role of data collection in business decision making

Data collection plays a central role in business decision-making by providing the necessary information and insights for organisations to make informed choices and formulate strategies.

In the modern business landscape, where data is abundant, businesses that can effectively collect, analyse, and interpret data have a significant competitive advantage.


Some key ways data collection impacts business decision-making processes include:

Understanding Customer Behaviour

Data collection allows businesses to gather information about their customers’ preferences, purchasing behaviour, and demographics. By analysing this data, businesses can identify trends and patterns, enabling them to tailor their products, services, and marketing strategies to better meet customer needs.

Market Analysis and Competitor Intelligence

Data collection helps businesses gain insights into market trends, industry performance, and competitor strategies. Analysing market data can help identify new opportunities, potential threats, and areas where a company can differentiate itself from competitors.


Product Development and Improvement

Through data collection, businesses can gather feedback from customers about their existing products and services. This feedback can be used to make improvements, address issues, and develop new offerings that align with customer preferences.

Optimising Operations and Processes

Data collection can be applied to internal operations, supply chain management, and production processes. Analysing operational data can lead to efficiency gains, cost reductions, and streamlined workflows, ultimately improving the overall performance of the business.


Risk Management

Data collection and analysis help businesses assess potential risks and vulnerabilities. By monitoring key performance indicators and relevant market data, companies can anticipate challenges and make proactive decisions to mitigate risks.

Financial Decision-Making

Financial data collection is crucial for budgeting, financial planning, and resource allocation. Accurate financial data enables organisations to make strategic decisions related to investments, pricing, and revenue management.

Employee Performance and Engagement

Data collection can extend to employee feedback, performance metrics, and engagement surveys. Understanding employee satisfaction and performance can lead to a more productive and motivated workforce.

Predictive Analytics and Forecasting

Data collection provides the foundation for predictive analytics , which involves using historical data to forecast future trends and outcomes. This capability helps businesses make proactive decisions rather than reacting to events after they occur.
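
As a very simple illustration of forecasting from historical data, the sketch below fits a straight-line trend with NumPy and extrapolates it forward. The figures are invented, and real predictive analytics would normally use richer models, more history, and proper validation.

```python
import numpy as np

# Hypothetical monthly sales for the past 12 months.
months = np.arange(12)
sales = np.array([210, 215, 222, 230, 228, 240, 247, 251, 260, 265, 271, 280])

# Fit a straight-line trend to the history.
slope, intercept = np.polyfit(months, sales, deg=1)

# Extrapolate the trend three months ahead.
future_months = np.arange(12, 15)
forecast = slope * future_months + intercept
print(np.round(forecast, 1))
```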

Personalisation and Customer Experience

By collecting and analysing customer data, businesses can offer personalised experiences and targeted marketing campaigns, improving customer satisfaction and loyalty.

Compliance and Regulation

In industries with strict regulatory requirements, data collection plays a vital role in ensuring compliance and meeting reporting obligations.


The types of data collection: primary and secondary data gathering

Data collection can take many forms, including primary and secondary data gathering.

Here is an overview of how they differ:

  • Primary Data Collection involves gathering original data directly from the source. Researchers or data collectors interact with individuals or entities to collect information through methods like surveys, interviews, questionnaires, observations, or experiments.
  • Secondary Data Collection involves using data that has already been collected by others. This data can come from a wide range of sources, such as government agencies, research institutions, public databases, or other existing datasets. Analysing and utilising secondary data can save time and resources but might be less tailored to the specific needs of the current study.

Quantitative vs qualitative data gathering

Other types of data collection are called quantitative and qualitative data gathering methods. This is what they involve:

  • Qualitative Data Collection focuses on obtaining non-numeric data, often used in social sciences, humanities, and other fields where understanding context, behaviours, and opinions is essential. Qualitative data can be collected through interviews, focus groups, content analysis, and more.
  • Quantitative Data Collection focuses on gathering numeric data that can be analysed statistically. Examples are surveys, experiments, structured observations, and sensor data.

In-depth examination of various data collection methods

Data collection methods can vary based on the nature of the data being sought, the research objectives, available resources, and the target population.

Let’s look at some data collection methods in use:

Surveys and Questionnaires

Surveys and questionnaires involve gathering information from a sample of individuals through a set of structured questions. They can be conducted on paper, via online questionnaires, telephone interviews, or face-to-face interviews.

They are efficient in collecting data from a large number of respondents and provide standardised responses for easy analysis. It’s worth remembering though that the wording and framing of survey questions can influence responses, and response rates may be affected by survey fatigue.


Interviews and Focus Groups

Interviews involve direct one-on-one or group interactions with participants to gather qualitative or quantitative data. Interviews can be structured, semi-structured, or unstructured, depending on the level of flexibility needed. They allow for in-depth exploration of topics and offer opportunities to clarify responses and probe deeper into participants’ perspectives.

On the other hand, they can be time-consuming, and the presence of the interviewer may introduce bias.

Observations and Fieldwork

Observational data collection involves systematically watching and recording behaviours, events, or interactions in a natural setting. Observations provide firsthand, real-time data and are useful for studying behaviours or phenomena in their natural context.

They can however be influenced by the observer’s bias, and certain behaviours may be difficult to capture unobtrusively.

Experimental Data Collection

Experimental data collection involves manipulating one or more variables to observe their effect on an outcome of interest. Such experiments are often conducted in controlled settings.

Such experiments establish cause-and-effect relationships and allow researchers to control extraneous variables, but they may not fully capture real-world complexities, and ethical considerations must be taken into account when manipulating variables.
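
For instance, analysing a controlled experiment often comes down to comparing group outcomes statistically. The sketch below runs a two-sample t-test on invented data with SciPy; it illustrates the idea rather than a complete experimental analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented outcome measurements for a control group and a treatment group.
control   = rng.normal(loc=50.0, scale=8.0, size=100)
treatment = rng.normal(loc=53.0, scale=8.0, size=100)

# Two-sample t-test: is the difference in means larger than chance would explain?
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```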

Document Review

Document review involves the systematic examination and analysis of existing documents, records, or artifacts to extract relevant information. It is cost-effective, time-saving and non-invasive.

It’s important to remember though that the accuracy, reliability, and completeness of the data depend on the quality and credibility of the source documents.


Probability Sampling

Probability sampling is used to select a representative sample from a larger population and involves random selection, ensuring that every element in the population has an equal probability of being chosen. Its advantages include generalisability, statistical inference and reduced bias.

Yet it’s worth remembering that implementing probability sampling can be more challenging and time-consuming compared to non-probability sampling methods.


Consequences of poor data collection: the hidden risks

Poor data collection can have far-reaching consequences such as flawed decision-making, compromised insights, and potentially damaging outcomes.

Let’s look at them in more detail:

  • Inaccurate analysis and decisions – If data collection is flawed or incomplete, the insights derived from the data will be inaccurate or misleading. Businesses may make ill-informed decisions that could lead to financial losses, missed opportunities, or ineffective strategies.
  • Biased results – Poor data collection can introduce bias into the data, either through the sampling process or the design of survey questions. Biased data can lead to unfair conclusions or discriminatory practices, affecting individuals or certain groups.
  • Missed opportunities and trends – Inadequate data collection may result in missing critical information and trends. Organisations might fail to identify emerging market opportunities, customer preferences, or potential threats, putting them at a competitive disadvantage.
  • Reputation damage – If data collected is mishandled, misused, or exposed due to inadequate security measures, it can lead to a breach of trust with customers, partners, or the public. This can damage an organisation’s reputation and result in a loss of customer loyalty.
  • Wasted resources – Poor data collection can lead to the collection of irrelevant or duplicate data. This wastes time, effort, and resources that could have been better allocated elsewhere.

To mitigate those consequences, it is essential to prioritise data quality, establish rigorous data collection procedures, and invest in data management systems and technologies.

Data collection best practices: nailing it right

As we discussed above, data collection is a critical process that lays the foundation for accurate analysis and informed decision-making.

To ensure success and maintain the integrity of data collection, several best practices should be followed:

Ensuring Accuracy in Data Collection

Ensuring accuracy in data collection is crucial to obtaining reliable and trustworthy information for analysis and decision-making.

Key practices to achieve that include clearly defining research objectives, using valid and reliable data collection instruments, following standardised data collection methods, ensuring clarity and precision throughout the process and monitoring the system regularly to ensure errors are detected early on.

Maintaining Ethical Standards in Data Collection

Maintaining ethical standards in data collection is essential to protect the rights and well-being of individuals and to ensure the integrity and trustworthiness of research and business practices.

Key principles and practices to adhere to include:

  • obtaining informed consent from all participants before collecting their data,
  • protecting participants by ensuring their personal information is kept confidential and secure,
  • collecting only the data necessary to address the research objectives,
  • respecting vulnerable populations as well as cultural and social norms.


Data Privacy and Security: A Non-negotiable Aspect

Data privacy and security are non-negotiable aspects in all data collection methods. To achieve them, remember to protect individual rights, build trust, mitigate data breach risks, comply with regulations and safeguard sensitive information.

Key steps include implementing strong security measures, obtaining informed consent, conducting data protection impact assessments, and staying up to date with data protection regulations. These efforts not only protect individuals’ rights but also contribute to a more trustworthy and responsible data-driven society.


Overcoming challenges in data collection: create an effective strategy

Creating an effective data collection strategy involves careful planning, consideration of potential challenges, and the implementation of solutions to overcome them.

Here’s our step-by-step guide to developing one:

  • Define clear objectives: Start by clearly defining your research or business objectives. Understand what data you need to collect, why you need it, and how it will be used to achieve your goals.
  • Choose appropriate data collection methods: Select data collection methods that align with your objectives and the nature of the data you need. Ensure that the chosen methods are suitable for the target population and are likely to give reliable results.
  • Design data collection instruments: If applicable, design data collection instruments such as surveys, questionnaires, or interview guides. Ensure that they are clear, unbiased, and relevant to your research objectives.
  • Pilot test the instruments: Before full deployment, pilot test your data collection instruments with a small group of participants to identify and address any issues, ambiguities, or errors.
  • Address sampling challenges: If your data collection involves sampling, carefully address potential sampling challenges. Use probability sampling when possible to ensure representativeness, and pay attention to issues like non-response bias or sample size.
  • Train data collectors: If data collection involves human interaction, provide comprehensive training to data collectors to ensure consistency and standardisation of the data.
  • Establish data privacy and security protocols: Implement robust data classification measures to protect participant information and ensure compliance with relevant data protection laws. Establish secure data storage and access controls.
  • Minimise non-sampling errors: Identify and minimise non-sampling errors, which can occur during data entry, data recording, or data processing. Conduct regular data quality checks to ensure accuracy.
  • Anticipate and address data collection challenges: Identify potential challenges that could arise during data collection, such as low response rates, uncooperative participants, or incomplete data. Develop strategies to address these challenges proactively.
  • Monitor data collection progress: Regularly monitor the progress of data collection to ensure it is on track and meeting the objectives. Be prepared to make adjustments if needed.
  • Maintain clear communication: Communicate with stakeholders and participants clearly and transparently about the process, its purpose, and the importance of their participation.
  • Record detailed documentation: Keep detailed documentation of the data collection process, including any modifications, issues encountered, and how they were resolved.
  • Plan for data analysis and utilisation: Consider how the collected data will be analysed and utilised to achieve the research or business objectives. Ensure that the data is relevant and sufficient for your analytical needs.
  • Evaluate and improve: After data collection is complete, evaluate the effectiveness of your data collection strategy and identify areas for improvement in future projects.

A good data strategy is always key. But if you are keen to do it right, you may need to work with specialists experienced in these kinds of projects.

At Future Processing we offer several data solutions that may have a huge impact on your business. Get in touch with our team to see how you can make the most of your information assets and take your organisation to the next level.


5 Methods of Data Collection for Quantitative Research

In this blog, read up on five different ways to approach data collection for quantitative studies - online surveys, offline surveys, interviews, etc.


Jan 29, 2024

quantilope is the Consumer Intelligence Platform for all end-to-end research needs


Quantitative research forms the basis for many business decisions. But what is quantitative data collection, why is it important, and which data collection methods are used in quantitative research? 

Table of Contents: 

  • What is quantitative data collection?
  • The importance of quantitative data collection
  • Methods used for quantitative data collection
  • Example of a survey showing quantitative data
  • Strengths and weaknesses of quantitative data

What is quantitative data collection? 

Quantitative data collection is the gathering of numeric data that puts consumer insights into a quantifiable context. It typically involves a large number of respondents - large enough to extract statistically reliable findings that can be extrapolated to a larger population.

The actual data collection process for quantitative findings is typically done using a quantitative online questionnaire that asks respondents yes/no questions, ranking scales, rating matrices, and other quantitative question types. With these results, researchers can generate data charts to summarize the quantitative findings and generate easily digestible key takeaways. 


The importance of quantitative data collection 

Quantitative data collection can confirm or deny a brand's hypothesis, guide product development, tailor marketing materials, and much more. It provides brands with reliable information to make decisions off of (i.e. 86% like lemon-lime flavor or just 12% are interested in a cinnamon-scented hand soap). 

Compared to qualitative data collection, quantitative data allows for comparison between insights because its higher base sizes make statistical significance testing possible. Brands can cut and analyze their dataset in a variety of ways, looking at their findings among different demographic groups, behavioral groups, and other segments of interest. It's also generally easier and quicker to collect quantitative data than it is to gather qualitative feedback, making it an important data collection tool for brands that need quick, reliable, concrete insights.
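
As a rough illustration of what a statistical significance check can look like, the sketch below runs a two-proportion z-test on invented survey results using SciPy. The counts, base sizes, and interpretation threshold are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import norm

# Invented results: share of respondents who like a flavor, in two demographic groups.
successes = np.array([172, 141])   # respondents answering "yes"
samples   = np.array([400, 380])   # base sizes per group

p1, p2 = successes / samples
p_pool = successes.sum() / samples.sum()

# Two-proportion z-test: is the gap between the two shares larger than sampling noise?
se = np.sqrt(p_pool * (1 - p_pool) * (1 / samples[0] + 1 / samples[1]))
z = (p1 - p2) / se
p_value = 2 * (1 - norm.cdf(abs(z)))
print(f"group shares: {p1:.1%} vs {p2:.1%}, z = {z:.2f}, p = {p_value:.3f}")
```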

In order to make justified business decisions from quantitative data, brands need to recruit a high-quality sample that's reflective of their true target market (one that's comprised of all ages/genders rather than an isolated group). For example, a study into usage and attitudes around orange juice might include consumers who buy and/or drink orange juice at a certain frequency or who buy a variety of orange juice brands from different outlets. 

Methods used for quantitative data collection 

So knowing what quantitative data collection is and why it's important, how does one go about researching a large, high-quality, representative sample?

Below are five examples of how to conduct your study through various data collection methods : 

Online quantitative surveys 

Online surveys are a common and effective way of collecting data from a large number of people. They tend to be made up of closed-ended questions so that responses across the sample are comparable; however, a small number of open-ended questions can be included as well (i.e. questions that require a written response rather than a selection of answers from a closed-ended list). Open-ended questions are helpful for gathering the actual language respondents use on a certain issue, or for collecting feedback on a view that might not be captured in a set list of responses.

Online surveys are quick and easy to send out, typically done so through survey panels. They can also appear in pop-ups on websites or via a link embedded in social media. From the participant’s point of view, online surveys are convenient to complete and submit, using whichever device they prefer (mobile phone, tablet, or computer). Anonymity is also viewed as a positive: online survey software ensures respondents’ identities are kept completely confidential.

To gather respondents for online surveys, researchers have several options. Probability sampling is one route, where respondents are selected using a random selection method. As such, everyone within the population has an equal chance of getting selected to participate. 

There are four common types of probability sampling; a code sketch illustrating each one follows the list below.

  • Simple random sampling is the most straightforward approach, which involves randomly selecting individuals from the population without any specific criteria or grouping. 
  • Stratified random sampling  divides the population into subgroups (strata) and selects a random sample from each stratum. This is useful when a population includes subgroups that you want to be sure you cover in your research. 
  • Cluster sampling   divides the population into clusters and then randomly selects some of the clusters to sample in their entirety. This is useful when a population is geographically dispersed and it would be impossible to include everyone.
  • Systematic sampling  begins with a random starting point and then selects every nth member of the population after that point (i.e. every 15th respondent). 
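
To make these four approaches concrete, here is a minimal Python sketch of each. The population, strata, and cluster definitions are made-up placeholders, not part of any survey platform:

```python
import random

population = list(range(1000))  # hypothetical sampling frame of 1,000 people
n = 100                         # desired sample size

# Simple random sampling: every member has an equal chance of selection.
simple = random.sample(population, n)

# Stratified random sampling: split the frame into strata (here two made-up
# age groups) and draw a random sample from each stratum.
strata = {"18-39": population[:500], "40+": population[500:]}
stratified = [person for group in strata.values()
              for person in random.sample(group, n // len(strata))]

# Cluster sampling: randomly pick whole clusters (e.g., cities) and include
# every member of the chosen clusters.
clusters = [population[i:i + 50] for i in range(0, len(population), 50)]
cluster_sample = [person for cluster in random.sample(clusters, 2)
                  for person in cluster]

# Systematic sampling: random starting point, then every k-th member.
k = len(population) // n
start = random.randrange(k)
systematic = population[start::k]
```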


While online surveys are by far the most common way to collect quantitative data in today’s modern age, there are still some harder-to-reach respondents where other mediums can be beneficial; for example, those who aren’t tech-savvy or who don’t have a stable internet connection. For these audiences, offline surveys   may be needed.

Offline quantitative surveys

Offline surveys (though much rarer to come across these days) are a way of gathering respondent feedback without digital means. This could be something like postal questionnaires that are sent out to a sample of the population, who are asked to return the completed questionnaire by mail (like the Census), or telephone surveys where questions are asked of respondents over the phone. 

Offline surveys certainly take longer to collect data than online surveys and they can become expensive if the population is difficult to reach (requiring a higher incentive). As with online surveys, anonymity is protected, assuming the mail is not intercepted or lost.

Despite the major difference in data collection to an online survey approach, offline survey data is still reported on in an aggregated, numeric fashion. 

Interviews

In-person interviews are another popular way of researching or polling a population. They can be thought of as a survey but in a verbal, in-person, or virtual face-to-face format. The online format of interviews is becoming more popular nowadays, as it is cheaper and logistically easier to organize than in-person face-to-face interviews, yet still allows the interviewer to see and hear from the respondent in their own words. 

Though many interviews are collected for qualitative research, interviews can also be leveraged quantitatively; like a phone survey, an interviewer runs through a survey with the respondent, asking mainly closed-ended questions (yes/no, multiple choice questions, or questions with rating scales that ask how strongly the respondent agrees with statements). The advantage of structured interviews is that the interviewer can pace the survey, making sure the respondent gives enough consideration to each question. It also adds a human touch, which can be more engaging for some respondents. On the other hand, for more sensitive issues, respondents may feel more inclined to complete a survey online for a greater sense of anonymity - so it all depends on your research questions, the survey topic, and the audience you're researching.

Observations

Observation studies in quantitative research are similar in nature to a qualitative ethnographic study (in which a researcher also observes consumers in their natural habitats), yet observation studies for quant research remain focused on the numbers - how many people perform an action, how much of a product consumers pick up, etc.

For quantitative observations, researchers will record the number and types of people who do a certain action - such as choosing a specific product from a grocery shelf, speaking to a company representative at an event, or how many people pass through a certain area within a given timeframe. Observation studies are generally structured, with the observer asked to note behavior using set parameters. Structured observation means that the observer has to hone in on very specific behaviors, which can be quite nuanced. This requires the observer to use his/her own judgment about what type of behavior is being exhibited (e.g. reading labels on products before selecting them; considering different items before making the final choice; making a selection based on price).
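
As a rough illustration of how structured observation data ends up being numeric, here is a small Python sketch; the logged behaviors are hypothetical labels an observer might use, not a standard coding scheme:

```python
from collections import Counter

# Hypothetical structured-observation log: one entry per shopper observed at
# the shelf, recorded against the behaviors defined in the observation guide.
observations = ["read_label", "price_check", "picked_item", "read_label",
                "picked_item", "walked_past", "picked_item"]

tally = Counter(observations)
print(tally.most_common())                       # counts per behavior
print(tally["picked_item"] / len(observations))  # share who picked up the product
```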

Document reviews and secondary data sources

A fifth method of data collection for quantitative research is known as secondary research : reviewing existing research to see how it can contribute to understanding a new issue in question. This is in contrast to the primary research methods above, which involve research that is specially commissioned and carried out for a specific research project. 

There are numerous secondary data sources that researchers can analyze such as  public records, government research, company databases, existing reports, paid-for research publications, magazines, journals, case studies, websites, books, and more.

Aside from using secondary research alone, secondary research documents can also be used in anticipation of primary research, to understand which knowledge gaps need to be filled and to nail down the issues that might be important to explore further in a primary research study.

Example of a survey showing quantitative data 

The below study shows what quantitative data might look like in a final study dashboard, taken from quantilope's Sneaker category insights study . 

The study includes a variety of usage and attitude metrics around sneaker wear, sneaker purchases, seasonality of sneakers, and more. Check out some of the data charts below showing these quantitative data findings - the first of which even cuts the quantitative data findings by demographics. 

[Data chart: sneaker study usage and attitude findings, cut by demographics]

Beyond these basic usage and attitude (or, descriptive) data metrics, quantitative data also includes advanced methods - such as implicit association testing. See what these quantitative data charts look like from the same sneaker study below:

[Data chart: implicit association results from the sneaker study]

These are just a few examples of how a researcher or insights team might show their quantitative data findings. However, there are many ways to visualize quantitative data in an insights study - bar charts, column charts, pie charts, donut charts, spider charts, and more - depending on what best suits the story your data is telling.

Strengths and weaknesses of quantitative data collection

Quantitative data is a great way to capture informative insights about your brand, product, category, or competitors. It's relatively quick, depending on your sample audience, and more affordable than other data collection methods such as qualitative focus groups. With quantitative panels, it's easy to access nearly any audience you might need - from something as general as the US population to something as specific as cannabis users. There are many ways to visualize quantitative findings, making it a customizable form of insights - whether you want to show the data in a bar chart, pie chart, etc. 

For those looking for quick, affordable, actionable insights, quantitative studies are the way to go.  

Quantitative data collection, despite the many benefits outlined above, might also not be the right fit for your exact needs. For example, you often don't get as detailed and in-depth answers quantitatively as you would with an in-person interview, focus group, or ethnographic observation (all forms of qualitative research). When running a quantitative survey, it's best practice to review your data for quality measures to ensure all respondents are ones you want to keep in your data set. Fortunately, there are a lot of precautions research providers can take to navigate these obstacles - such as automated data cleaners and data flags. Of course, the first step to ensuring high-quality results is to use a trusted panel provider.
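
Providers' actual cleaning tools are proprietary, but the general idea behind such data flags can be sketched in a few lines of Python. The thresholds, column names, and example responses below are invented purely for illustration:

```python
import pandas as pd

# Hypothetical survey export: one row per respondent, four rating-scale
# questions plus the time taken to complete the survey (in seconds).
df = pd.DataFrame({
    "respondent": ["r1", "r2", "r3", "r4"],
    "q1": [4, 5, 3, 3], "q2": [4, 5, 2, 3],
    "q3": [5, 5, 4, 3], "q4": [3, 5, 4, 3],
    "duration_sec": [310, 45, 280, 295],
})

rating_cols = ["q1", "q2", "q3", "q4"]
median_time = df["duration_sec"].median()

# Flag "speeders" (finished far faster than the typical respondent) and
# "straightliners" (gave the identical answer to every rating question).
df["speeder"] = df["duration_sec"] < 0.3 * median_time
df["straightliner"] = df[rating_cols].nunique(axis=1) == 1

clean = df[~(df["speeder"] | df["straightliner"])]
print(clean[["respondent"] + rating_cols])
```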

Quantitative research typically needs to undergo statistical analysis for it to be useful and actionable to any business. It is therefore crucial that the method of data collection, sample size, and sample criteria are considered in light of the research questions asked.

quantilope’s online platform is ideal for quantitative research studies. The online format means a large sample can be reached easily and quickly through connected respondent panels that effectively reach the desired target audience. Response rates are high, as respondents can take their survey from anywhere, using any device with internet access.

Surveys are easy to build with quantilope’s online survey builder. Simply choose questions to include from pre-designed survey templates or build your own questions using the platform’s drag & drop functionality (both options are fully customizable). Once the survey is live, findings update in real-time so that brands can get an idea of consumer attitudes long before the survey is complete. In addition to basic usage and attitude questions, quantilope’s suite of advanced research methodologies provides an AI-driven approach to many types of research questions. These range from exploring the features of products that drive purchase through a Key Driver Analysis , to compiling the ideal portfolio of products using a TURF , to identifying the optimal price point for a product or service using a Price Sensitivity Meter (PSM) .

Depending on the type of data sought, it might be worth considering a mixed-method approach, including both qual and quant in a single research study. Alongside quantitative online surveys, quantilope’s video research solution, inColor , offers qualitative research in the form of videoed responses to survey questions. inColor’s qualitative data analysis includes an AI-driven read on respondent sentiment, keyword trends, and facial expressions.




Table of Contents

  • What is data collection?
  • Why do we need data collection?
  • What are the different data collection methods?
  • Data collection tools
  • The importance of ensuring accurate and appropriate data collection
  • Issues related to maintaining the integrity of data collection
  • What are common challenges in data collection?
  • What are the key steps in the data collection process?
  • Data collection considerations and best practices
  • Choose the right data science program

What is Data Collection? Definition, Types, Tools, and Techniques

The process of gathering and analyzing accurate data from various sources to find answers to research problems, trends and probabilities, etc., and to evaluate possible outcomes is known as data collection. Knowledge is power, information is knowledge, and data is information in digitized form, at least as defined in IT. Hence, data is power. But before you can leverage that data into a successful strategy for your organization or business, you need to gather it. That’s your first step.

So, to help you get the process started, we shine a spotlight on data collection. What exactly is it? Believe it or not, it’s more than just doing a Google search! Furthermore, what are the different types of data collection? And what kinds of data collection tools and data collection techniques exist?

If you want to get up to speed on the data collection process, you’ve come to the right place. 


Data collection is the process of collecting and evaluating information or data from multiple sources to find answers to research problems, answer questions, evaluate outcomes, and forecast trends and probabilities. It is an essential phase in all types of research, analysis, and decision-making, including that done in the social sciences, business, and healthcare.

Accurate data collection is necessary to make informed business decisions, ensure quality assurance, and keep research integrity.

During data collection, the researchers must identify the data types, the sources of data, and what methods are being used. We will soon see that there are many different data collection methods . There is heavy reliance on data collection in research, commercial, and government fields.

Before an analyst begins collecting data, they must answer three questions first:

  • What’s the goal or purpose of this research?
  • What kinds of data are they planning on gathering?
  • What methods and procedures will be used to collect, store, and process the information?

Additionally, we can break up data into qualitative and quantitative types. Qualitative data covers descriptions such as color, size, quality, and appearance. Quantitative data, unsurprisingly, deals with numbers, such as statistics, poll numbers, percentages, etc.

Before a judge makes a ruling in a court case or a general creates a plan of attack, they must have as many relevant facts as possible. The best courses of action come from informed decisions, and information and data are synonymous.

The concept of data collection isn’t a new one, as we’ll see later, but the world has changed. There is far more data available today, and it exists in forms that were unheard of a century ago. The data collection process has had to change and grow with the times, keeping pace with technology.

Whether you’re in the world of academia, trying to conduct research, or part of the commercial sector, thinking of how to promote a new product, you need data collection to help you make better choices.

Now that you know what is data collection and why we need it, let's take a look at the different methods of data collection. While the phrase “data collection” may sound all high-tech and digital, it doesn’t necessarily entail things like computers, big data , and the internet. Data collection could mean a telephone survey, a mail-in comment card, or even some guy with a clipboard asking passersby some questions. But let’s see if we can sort the different data collection methods into a semblance of organized categories.

Primary and secondary methods of data collection are two approaches used to gather information for research or analysis purposes. Let's explore each data collection method in detail:

1. Primary Data Collection:

Primary data collection involves the collection of original data directly from the source or through direct interaction with the respondents. This method allows researchers to obtain firsthand information specifically tailored to their research objectives. There are various techniques for primary data collection, including:

a. Surveys and Questionnaires: Researchers design structured questionnaires or surveys to collect data from individuals or groups. These can be conducted through face-to-face interviews, telephone calls, mail, or online platforms.

b. Interviews: Interviews involve direct interaction between the researcher and the respondent. They can be conducted in person, over the phone, or through video conferencing. Interviews can be structured (with predefined questions), semi-structured (allowing flexibility), or unstructured (more conversational).

c. Observations: Researchers observe and record behaviors, actions, or events in their natural setting. This method is useful for gathering data on human behavior, interactions, or phenomena without direct intervention.

d. Experiments: Experimental studies involve the manipulation of variables to observe their impact on the outcome. Researchers control the conditions and collect data to draw conclusions about cause-and-effect relationships.

e. Focus Groups: Focus groups bring together a small group of individuals who discuss specific topics in a moderated setting. This method helps in understanding opinions, perceptions, and experiences shared by the participants.

2. Secondary Data Collection:

Secondary data collection involves using existing data collected by someone else for a purpose different from the original intent. Researchers analyze and interpret this data to extract relevant information. Secondary data can be obtained from various sources, including:

a. Published Sources: Researchers refer to books, academic journals, magazines, newspapers, government reports, and other published materials that contain relevant data.

b. Online Databases: Numerous online databases provide access to a wide range of secondary data, such as research articles, statistical information, economic data, and social surveys.

c. Government and Institutional Records: Government agencies, research institutions, and organizations often maintain databases or records that can be used for research purposes.

d. Publicly Available Data: Data shared by individuals, organizations, or communities on public platforms, websites, or social media can be accessed and utilized for research.

e. Past Research Studies: Previous research studies and their findings can serve as valuable secondary data sources. Researchers can review and analyze the data to gain insights or build upon existing knowledge.

Now that we’ve explained the various techniques, let’s narrow our focus even further by looking at some specific tools. For example, we mentioned interviews as a technique, but we can further break that down into different interview types (or “tools”).

Word Association

The researcher gives the respondent a set of words and asks them what comes to mind when they hear each word.

Sentence Completion

Researchers use sentence completion to understand what kind of ideas the respondent has. This tool involves giving an incomplete sentence and seeing how the interviewee finishes it.

Role-Playing

Respondents are presented with an imaginary situation and asked how they would act or react if it was real.

In-Person Surveys

The researcher asks questions in person.

Online/Web Surveys

These surveys are easy to accomplish, but some users may be unwilling to answer truthfully, if at all.

Mobile Surveys

These surveys take advantage of the increasing proliferation of mobile technology. Mobile collection surveys rely on mobile devices like tablets or smartphones to conduct surveys via SMS or mobile apps.

Phone Surveys

No researcher can call thousands of people at once, so they need a third party to handle the chore. However, many people have call screening and won’t answer.

Observation

Sometimes, the simplest method is the best. Researchers who make direct observations collect data quickly and easily, with little intrusion or third-party bias. Naturally, it’s only effective in small-scale situations.

Accurate data collection is crucial to preserving the integrity of research, regardless of the field of study or whether the data is quantitative or qualitative. Errors are less likely to occur when the right data gathering tools are used, whether they are brand-new, updated versions of existing tools, or tools that are already available.

The effects of data collection done incorrectly include the following:

  • Erroneous conclusions that squander resources
  • Decisions that compromise public policy
  • Incapacity to correctly respond to research inquiries
  • Bringing harm to participants who are humans or animals
  • Deceiving other researchers into pursuing futile research avenues
  • The study's inability to be replicated and validated

Even though the degree of impact from flawed data collection may vary by discipline and the type of investigation, there is the potential for disproportionate harm when these study findings are used to support recommendations for public policy.

Let us now look at the various issues that we might face while maintaining the integrity of data collection.

Maintaining data integrity is the main justification for detecting errors in the data gathering process, whether they were introduced purposefully (deliberate falsifications) or not (systematic or random errors).

Quality assurance and quality control are two strategies that help protect data integrity and guarantee the scientific validity of study results.

Each strategy is used at various stages of the research timeline:

  • Quality assurance - activities that take place before data gathering begins
  • Quality control - activities that take place both during and after data collection

Let us explore each of them in more detail now.

Quality Assurance

As quality assurance comes before data collection, its primary goal is "prevention" (i.e., forestalling problems with data collection). The best way to protect the accuracy of data collection is through prevention. The uniformity of protocol created in a thorough and exhaustive procedures manual for data collection serves as the best example of this proactive step. 

The likelihood of failing to spot issues and mistakes early in the research attempt increases when guides are written poorly. There are several ways to show these shortcomings:

  • Failure to determine the precise subjects and methods for retraining or training staff employees in data collecting
  • An incomplete list of the items to be collected
  • There isn't a system in place to track modifications to processes that may occur as the investigation continues.
  • Instead of detailed, step-by-step instructions on how to deliver tests, there is a vague description of the data gathering tools that will be employed.
  • Uncertainty regarding the date, procedure, and identity of the person or people in charge of examining the data
  • Incomprehensible guidelines for using, adjusting, and calibrating the data collection equipment.

Now, let us look at how to ensure Quality Control.


Quality Control

Despite the fact that quality control actions (detection/monitoring and intervention) take place both after and during data collection, the specifics should be meticulously detailed in the procedures manual. Establishing monitoring systems requires a specific communication structure, which is a prerequisite. Following the discovery of data collection problems, there should be no ambiguity regarding the information flow between the primary investigators and staff personnel. A poorly designed communication system promotes slack oversight and reduces opportunities for error detection.

Direct staff observation during site visits, conference calls, and frequent or routine assessments of data reports to spot discrepancies, out-of-range values, or invalid codes can all be used as forms of detection or monitoring. Site visits might not be appropriate for all disciplines. Still, without routine auditing of records, whether qualitative or quantitative, it will be challenging for investigators to confirm that data gathering is taking place in accordance with the manual's defined methods. Additionally, quality control determines the appropriate solutions, or "actions," to fix flawed data gathering procedures and reduce recurrences.

Problems with data collection, for instance, that call for immediate action include:

  • Fraud or misbehavior
  • Systematic mistakes, procedure violations 
  • Individual data items with errors
  • Issues with certain staff members or a site's performance 

Researchers are trained to include one or more secondary measures that can be used to verify the quality of information being obtained from the human subject in the social and behavioral sciences where primary data collection entails using human subjects. 

For instance, a researcher conducting a survey would be interested in learning more about the prevalence of risky behaviors among young adults as well as the social factors that influence these risky behaviors' propensity for and frequency. Let us now explore the common challenges with regard to data collection.

There are some prevalent challenges faced while collecting data. Let us explore a few of them to understand them better and learn how to avoid them.

Data Quality Issues

The main threat to the broad and successful application of machine learning is poor data quality. Data quality must be your top priority if you want to make technologies like machine learning work for you. Let's talk about some of the most prevalent data quality problems in this blog article and how to fix them.

Inconsistent Data

When working with various data sources, it's conceivable that the same information will have discrepancies between sources. The differences could be in formats, units, or occasionally spellings. The introduction of inconsistent data might also occur during firm mergers or relocations. Inconsistencies in data have a tendency to accumulate and reduce the value of data if they are not continually resolved. Organizations that have heavily focused on data consistency do so because they only want reliable data to support their analytics.
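
A rough Python sketch of what resolving these inconsistencies can look like in practice; the company names, revenue figures, and unit labels below are invented for illustration:

```python
import pandas as pd

# Hypothetical records of the same customers arriving from two sources with
# different spellings, number formats, and units.
df = pd.DataFrame({
    "customer": ["Acme Corp", "ACME CORP.", "Beta LLC"],
    "revenue": ["1,200", "1200", "0.9"],
    "revenue_unit": ["USD", "USD", "USD thousands"],
})

# Normalize spellings and units so the records can be compared and merged.
df["customer"] = (df["customer"].str.upper()
                  .str.replace(".", "", regex=False)
                  .str.strip())
df["revenue"] = df["revenue"].str.replace(",", "", regex=False).astype(float)
df.loc[df["revenue_unit"] == "USD thousands", "revenue"] *= 1000
df["revenue_unit"] = "USD"

print(df)
```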

Data Downtime

Data is the driving force behind the decisions and operations of data-driven businesses. However, there may be brief periods when their data is unreliable or not prepared. Customer complaints and subpar analytical outcomes are only two ways that this data unavailability can have a significant impact on businesses. A data engineer spends about 80% of their time updating, maintaining, and guaranteeing the integrity of the data pipeline. The lengthy operational lead time from data capture to insight also means there is a high marginal cost to answering each new business question.

Schema modifications and migration problems are just two examples of the causes of data downtime. Data pipelines can be difficult due to their size and complexity. Data downtime must be continuously monitored, and it must be reduced through automation.

Ambiguous Data

Even with thorough oversight, some errors can still occur in massive databases or data lakes. For data streaming at a fast speed, the issue becomes more overwhelming. Spelling mistakes can go unnoticed, formatting difficulties can occur, and column heads might be deceptive. This unclear data might cause a number of problems for reporting and analytics.


Duplicate Data

Streaming data, local databases, and cloud data lakes are just a few of the sources of data that modern enterprises must contend with. They might also have application and system silos. These sources are likely to duplicate and overlap each other quite a bit. For instance, duplicate contact information has a substantial impact on customer experience. If certain prospects are ignored while others are engaged repeatedly, marketing campaigns suffer. The likelihood of biased analytical outcomes increases when duplicate data are present. It can also result in ML models with biased training data.
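
A minimal Python sketch of de-duplicating overlapping contact records; the names, addresses, and matching rule are hypothetical, and real deduplication usually needs fuzzier matching than this:

```python
import pandas as pd

# Hypothetical contact records collected from two overlapping sources.
contacts = pd.DataFrame({
    "name": ["Jane Doe", "Jane Doe", "Sam Lee"],
    "email": ["jane@example.com", "JANE@EXAMPLE.COM ", "sam@example.com"],
    "source": ["crm", "webform", "crm"],
})

# Normalize the matching key first; otherwise the two "Jane Doe" rows look
# distinct and she would be contacted twice by the same campaign.
contacts["email_key"] = contacts["email"].str.strip().str.lower()
deduped = contacts.drop_duplicates(subset="email_key", keep="first")

print(deduped[["name", "email", "source"]])
```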

Too Much Data

While we emphasize data-driven analytics and its advantages, a data quality problem with excessive data exists. There is a risk of getting lost in an abundance of data when searching for information pertinent to your analytical efforts. Data scientists, data analysts, and business users devote 80% of their work to finding and organizing the appropriate data. With an increase in data volume, other problems with data quality become more serious, particularly when dealing with streaming data and big files or databases.

Inaccurate Data

For highly regulated businesses like healthcare, data accuracy is crucial. Given the current experience, it is more important than ever to increase the data quality for COVID-19 and later pandemics. Inaccurate information does not provide you with a true picture of the situation and cannot be used to plan the best course of action. Personalized customer experiences and marketing strategies underperform if your customer data is inaccurate.

Data inaccuracies can be attributed to a number of things, including data degradation, human error, and data drift. Worldwide data decay occurs at a rate of about 3% per month, which is quite concerning. Data integrity can be compromised while being transferred between different systems, and data quality might deteriorate with time.
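
To put that decay figure in perspective, here is a quick back-of-the-envelope calculation, taking the cited ~3% per month at face value:

```python
# If roughly 3% of records go stale each month, compounding over a year:
monthly_decay = 0.03
months = 12
still_accurate = (1 - monthly_decay) ** months
print(f"~{still_accurate:.0%} of records still accurate after {months} months")
# Roughly 69%, i.e. nearly a third of an untended database is stale within a year.
```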

Hidden Data

The majority of businesses only utilize a portion of their data, with the remainder sometimes being lost in data silos or discarded in data graveyards. For instance, the customer service team might not receive client data from sales, missing an opportunity to build more precise and comprehensive customer profiles. Missing out on possibilities to develop novel products, enhance services, and streamline procedures is caused by hidden data.

Finding Relevant Data

Finding relevant data is not so easy. There are several factors that we need to consider while trying to find relevant data, which include -

  • Relevant domain
  • Relevant demographics
  • Relevant time period, among several other factors

Data that is not relevant to our study on any of these factors is rendered obsolete, and we cannot effectively proceed with its analysis. This could lead to incomplete research or analysis, re-collecting data again and again, or shutting down the study.

Deciding the Data to Collect

Determining what data to collect is one of the most important decisions in data collection, and it should be made at the very start. We must choose the subjects the data will cover, the sources we will use to gather it, and the quantity of information we will require. Our responses to these queries will depend on our aims, or what we expect to achieve utilizing our data. As an illustration, we may choose to gather information on the categories of articles that website visitors between the ages of 20 and 50 most frequently access. We can also decide to compile data on the typical age of all the clients who made a purchase from our business over the previous month.

Not addressing this could lead to double work and collection of irrelevant data or ruining your study as a whole.

Dealing With Big Data

Big data refers to exceedingly massive data sets with more intricate and diversified structures. These traits typically result in increased challenges while storing, analyzing, and using additional methods of extracting results. Big data refers especially to data sets that are so enormous or intricate that conventional data processing tools are insufficient - the overwhelming amount of data, both unstructured and structured, that a business faces on a daily basis. 

The amount of data produced by healthcare applications, the internet, social networking sites, sensor networks, and many other businesses is rapidly growing as a result of recent technological advancements. Big data refers to the vast volume of data created from numerous sources in a variety of formats at extremely fast rates. Dealing with this kind of data is one of the many challenges of data collection and is a crucial step toward collecting effective data. 

Low Response and Other Research Issues

Poor design and low response rates were shown to be two issues with data collecting, particularly in health surveys that used questionnaires. This might lead to an insufficient or inadequate supply of data for the study. Creating an incentivized data collection program might be beneficial in this case to get more responses.

Now, let us look at the key steps in the data collection process.

In the Data Collection Process, there are 5 key steps. They are explained briefly below -

1. Decide What Data You Want to Gather

The first thing that we need to do is decide what information we want to gather. We must choose the subjects the data will cover, the sources we will use to gather it, and the quantity of information that we would require. For instance, we may choose to gather information on the categories of products that an average e-commerce website visitor between the ages of 30 and 45 most frequently searches for. 

2. Establish a Deadline for Data Collection

The process of creating a strategy for data collection can now begin. We should set a deadline for our data collection at the outset of our planning phase. We might want to collect some forms of data continuously; for instance, we might want to build up a technique for tracking transactional data and website visitor statistics over the long term. However, if we are tracking data for a particular campaign, we will track it throughout a certain time frame. In these situations, we will have a schedule for when we will begin and finish gathering data. 

3. Select a Data Collection Approach

We will select the data collection technique that will serve as the foundation of our data gathering plan at this stage. We must take into account the type of information that we wish to gather, the time period during which we will receive it, and the other factors we decide on to choose the best gathering strategy.

4. Gather Information

Once our plan is complete, we can put our data collection plan into action and begin gathering data. We can store and arrange our data in our DMP (data management platform). We need to be careful to follow our plan and keep an eye on how it's doing. Especially if we are collecting data regularly, setting up a timetable for when we will be checking in on how our data gathering is going may be helpful. As circumstances alter and we learn new details, we might need to amend our plan.

5. Examine the Information and Apply Your Findings

It's time to examine our data and arrange our findings after we have gathered all of our information. The analysis stage is essential because it transforms unprocessed data into insightful knowledge that can be applied to better our marketing plans, goods, and business judgments. The analytics tools included in our DMP can be used to assist with this phase. We can put the discoveries to use to enhance our business once we have discovered the patterns and insights in our data.
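
As a tiny illustration of this final step, here is a Python sketch that turns collected records into a usable finding; the visitor ages and product categories are made up and loosely follow the e-commerce example from step 1:

```python
import pandas as pd

# Hypothetical slice of the data gathered in step 4: one row per tracked
# website visit, with the visitor's age and the product category searched.
visits = pd.DataFrame({
    "age": [32, 41, 28, 35, 44, 39],
    "category": ["shoes", "electronics", "shoes", "shoes",
                 "home", "electronics"],
})

# Step 5 in miniature: filter to the audience of interest (30-45 year-olds)
# and summarize which categories they search for most.
target = visits[visits["age"].between(30, 45)]
print(target["category"].value_counts())
```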

Let us now look at some data collection considerations and best practices that one might follow.

We must carefully plan before spending time and money traveling to the field to gather data. Effective data collection strategies can help us collect richer, more accurate data while saving time and resources.

Below, we will be discussing some of the best practices that we can follow for the best results -

1. Take Into Account the Price of Each Extra Data Point

Once we have decided on the data we want to gather, we need to make sure to take the expense of doing so into account. Our surveyors and respondents will incur additional costs for each additional data point or survey question.

2. Plan How to Gather Each Data Piece

There is a dearth of freely accessible data. Sometimes the data is there, but we may not have access to it. For instance, unless we have a compelling cause, we cannot openly view another person's medical information. It could be challenging to measure several types of information.

Consider how time-consuming and difficult it will be to gather each piece of information while deciding what data to acquire.

3. Think About Your Choices for Data Collecting Using Mobile Devices

Mobile-based data collecting can be divided into three categories -

  • IVRS (interactive voice response technology) -  Will call the respondents and ask them questions that have already been recorded. 
  • SMS data collection - Will send a text message to the respondent, who can then respond to questions by text on their phone. 
  • Field surveyors - Can directly enter data into an interactive questionnaire while speaking to each respondent, thanks to smartphone apps.

We need to make sure to select the appropriate tool for our survey and responders because each one has its own disadvantages and advantages.

4. Carefully Consider the Data You Need to Gather

It's all too easy to get information about anything and everything, but it's crucial to only gather the information that we require. 

It is helpful to consider these 3 questions:

  • What details will be helpful?
  • What details are available?
  • What specific details do you require?

5. Remember to Consider Identifiers

Identifiers, or details describing the context and source of a survey response, are just as crucial as the information about the subject or program that we are actually researching.

In general, adding more identifiers will enable us to pinpoint our program's successes and failures with greater accuracy, but moderation is the key.

6. Data Collecting Through Mobile Devices is the Way to Go

Although collecting data on paper is still common, modern technology relies heavily on mobile devices. They enable us to gather many various types of data at relatively lower prices and are accurate as well as quick. There aren't many reasons not to pick mobile-based data collecting with the boom of low-cost Android devices that are available nowadays.


Are you thinking about pursuing a career in the field of data science? Simplilearn's Data Science courses are designed to provide you with the necessary skills and expertise to excel in this rapidly changing field. Here's a detailed comparison for your reference:

  • Data Scientist Master's Program (Simplilearn) - All Geos - 11 Months - Basic coding experience required - 10+ skills including data structure, data manipulation, NumPy, Scikit-Learn, Tableau and more - Applied Learning via Capstone and 25+ Data Science Projects - Cost: $$
  • Post Graduate Program In Data Science (Purdue) - All Geos - 11 Months - Basic coding experience required - 8+ skills including Exploratory Data Analysis, Descriptive Statistics, Inferential Statistics, and more - Purdue Alumni Association Membership, free IIMJobs Pro-Membership for 6 months, resume building assistance - Cost: $$$$
  • Post Graduate Program In Data Science (Caltech) - Not applicable in US - 11 Months - No coding experience required - 8+ skills including Supervised & Unsupervised Learning, Deep Learning, Data Visualization, and more - Up to 14 CEU credits, Caltech CTME Circle Membership - Cost: $$$$

We live in the Data Age, and if you want a career that fully takes advantage of this, you should consider a career in data science. Simplilearn offers a Caltech Post Graduate Program in Data Science  that will train you in everything you need to know to secure the perfect position. This Data Science PG program is ideal for all working professionals, covering job-critical topics like R, Python programming , machine learning algorithms , NLP concepts , and data visualization with Tableau in great detail. This is all provided via our interactive learning model with live sessions by global practitioners, practical labs, and industry projects.

1. What is data collection with example?

Data collection is the process of collecting and analyzing information on relevant variables in a predetermined, methodical way so that one can respond to specific research questions, test hypotheses, and assess results. Data collection can be either qualitative or quantitative. Example: A company collects customer feedback through online surveys and social media monitoring to improve their products and services.

2. What are the primary data collection methods?

As is well known, gathering primary data is costly and time intensive. The main techniques for gathering data are observation, interviews, questionnaires, schedules, and surveys.

3. What are data collection tools?

The term "data collecting tools" refers to the tools/devices used to gather data, such as a paper questionnaire or a system for computer-assisted interviews. Tools used to gather data include case studies, checklists, interviews, occasionally observation, surveys, and questionnaires.

4. What’s the difference between quantitative and qualitative methods?

While qualitative research focuses on words and meanings, quantitative research deals with figures and statistics. You can systematically measure variables and test hypotheses using quantitative methods. You can delve deeper into ideas and experiences using qualitative methodologies.

5. What are quantitative data collection methods?

While there are numerous other ways to get quantitative information, the methods indicated above - probability sampling, interviews, questionnaires, observation, and document review - are the most typical and frequently employed, whether collecting information offline or online.

6. What is mixed methods research?

User research that includes both qualitative and quantitative techniques is known as mixed methods research. For deeper user insights, mixed methods research combines insightful user data with useful statistics.

7. What are the benefits of collecting data?

Collecting data offers several benefits, including:

  • Knowledge and Insight
  • Evidence-Based Decision Making
  • Problem Identification and Solution
  • Validation and Evaluation
  • Identifying Trends and Predictions
  • Support for Research and Development
  • Policy Development
  • Quality Improvement
  • Personalization and Targeting
  • Knowledge Sharing and Collaboration

8. What’s the difference between reliability and validity?

Reliability is about consistency and stability, while validity is about accuracy and appropriateness. Reliability focuses on the consistency of results, while validity focuses on whether the results are actually measuring what they are intended to measure. Both reliability and validity are crucial considerations in research to ensure the trustworthiness and meaningfulness of the collected data and measurements.


