Content Analysis | A Step-by-Step Guide with Examples

Published on 5 May 2022 by Amy Luo. Revised on 5 December 2022.

Content analysis is a research method used to identify patterns in recorded communication. To conduct content analysis, you systematically collect data from a set of texts, which can be written, oral, or visual:

  • Books, newspapers, and magazines
  • Speeches and interviews
  • Web content and social media posts
  • Photographs and films

Content analysis can be both quantitative (focused on counting and measuring) and qualitative (focused on interpreting and understanding). In both types, you categorise or ‘code’ words, themes, and concepts within the texts and then analyse the results.

Table of contents

  • What is content analysis used for?
  • Advantages of content analysis
  • Disadvantages of content analysis
  • How to conduct content analysis

What is content analysis used for?

Researchers use content analysis to find out about the purposes, messages, and effects of communication content. They can also make inferences about the producers and audience of the texts they analyse.

Content analysis can be used to quantify the occurrence of certain words, phrases, subjects, or concepts in a set of historical or contemporary texts.

In addition, content analysis can be used to make qualitative inferences by analysing the meaning and semantic relationship of words and concepts.

Because content analysis can be applied to a broad range of texts, it is used in a variety of fields, including marketing, media studies, anthropology, cognitive science, psychology, and many social science disciplines. It has various possible goals:

  • Finding correlations and patterns in how concepts are communicated
  • Understanding the intentions of an individual, group, or institution
  • Identifying propaganda and bias in communication
  • Revealing differences in communication in different contexts
  • Analysing the consequences of communication content, such as the flow of information or audience responses

Advantages of content analysis

  • Unobtrusive data collection

You can analyse communication and social interaction without the direct involvement of participants, so your presence as a researcher doesn’t influence the results.

  • Transparent and replicable

When done well, content analysis follows a systematic procedure that can easily be replicated by other researchers, yielding results with high reliability.

  • Highly flexible

You can conduct content analysis at any time, in any location, and at low cost. All you need is access to the appropriate sources.

Disadvantages of content analysis

  • Reductive

Focusing on words or phrases in isolation can sometimes be overly reductive, disregarding context, nuance, and ambiguous meanings.

  • Subjective

Content analysis almost always involves some level of subjective interpretation, which can affect the reliability and validity of the results and conclusions.

  • Time intensive

Manually coding large volumes of text is extremely time-consuming, and it can be difficult to automate effectively.

How to conduct content analysis

If you want to use content analysis in your research, you need to start with a clear, direct research question.

Next, you follow these five steps.

Step 1: Select the content you will analyse

Based on your research question, choose the texts that you will analyse. You need to decide:

  • The medium (e.g., newspapers, speeches, or websites) and genre (e.g., opinion pieces, political campaign speeches, or marketing copy)
  • The criteria for inclusion (e.g., newspaper articles that mention a particular event, speeches by a certain politician, or websites selling a specific type of product)
  • The parameters in terms of date range, location, etc.

If there are only a small number of texts that meet your criteria, you might analyse all of them. If there is a large volume of texts, you can select a sample.

Step 2: Define the units and categories of analysis

Next, you need to determine the level at which you will analyse your chosen texts. This means defining:

  • The unit(s) of meaning that will be coded. For example, are you going to record the frequency of individual words and phrases, the characteristics of people who produced or appear in the texts, the presence and positioning of images, or the treatment of themes and concepts?
  • The set of categories that you will use for coding. Categories can be objective characteristics (e.g., aged 30–40, lawyer, parent) or more conceptual (e.g., trustworthy, corrupt, conservative, family-oriented).

Step 3: Develop a set of rules for coding

Coding involves organising the units of meaning into the previously defined categories. Especially with more conceptual categories, it’s important to clearly define the rules for what will and won’t be included to ensure that all texts are coded consistently.

Coding rules are especially important if multiple researchers are involved, but even if you’re coding all of the text by yourself, recording the rules makes your method more transparent and reliable.

Step 4: Code the text according to the rules

You go through each text and record all relevant data in the appropriate categories. This can be done manually or aided with computer programs, such as QSR NVivo, Atlas.ti, and Diction, which can help speed up the process of counting and categorising words and phrases.
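Dedicated software aside, the mechanics of quantitative coding can be illustrated with a short script. The Python sketch below counts how often keywords from a coding scheme appear in a text; the scheme, keywords, and example sentence are invented for illustration and are not drawn from any of the tools named above.

```python
from collections import Counter
import re

# Hypothetical coding scheme: each category is matched by a set of keywords.
coding_scheme = {
    "economy": {"tax", "jobs", "budget"},
    "security": {"police", "crime", "defence"},
}

def code_text(text, scheme):
    """Count how many words in `text` fall into each category."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for word in words:
        for category, keywords in scheme.items():
            if word in keywords:
                counts[category] += 1
    return counts

print(code_text("New jobs and a bigger police budget", coding_scheme))
# -> Counter({'economy': 2, 'security': 1})
```

Real coding software handles stemming, phrases, and context windows; this sketch only shows the core idea of mapping units of text into predefined categories.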

Step 5: Analyse the results and draw conclusions

Once coding is complete, the collected data is examined to find patterns and draw conclusions in response to your research question. You might use statistical analysis to find correlations or trends, discuss your interpretations of what the results mean, and make inferences about the creators, context, and audience of the texts.

Cite this Scribbr article


Luo, A. (2022, December 05). Content Analysis | A Step-by-Step Guide with Examples. Scribbr. Retrieved 26 August 2024, from https://www.scribbr.co.uk/research-methods/content-analysis-explained/


African Journal of Emergency Medicine, 7(3), September 2017

A hands-on guide to doing content analysis

Christen Erlingsson

a Department of Health and Caring Sciences, Linnaeus University, Kalmar 391 82, Sweden

Petra Brysiewicz

b School of Nursing & Public Health, University of KwaZulu-Natal, Durban 4041, South Africa

Abstract

There is growing recognition of the important role played by qualitative research and its usefulness in many fields, including the emergency care context in Africa. Novice qualitative researchers are often daunted by the prospect of qualitative data analysis and thus may experience much difficulty in the data analysis process. Our objective with this manuscript is to provide a practical hands-on example of qualitative content analysis to aid novice qualitative researchers in their task.

African relevance

  • Qualitative research is useful to deepen the understanding of the human experience.
  • Novice qualitative researchers may benefit from this hands-on guide to content analysis.
  • Practical tips and data analysis templates are provided to assist in the analysis process.

Introduction

There is growing recognition of the important role played by qualitative research and its usefulness in many fields, including emergency care research. An increasing number of health researchers are currently opting to use various qualitative research approaches in exploring and describing complex phenomena, providing textual accounts of individuals’ “life worlds”, and giving voice to the vulnerable populations that our patients so often represent. Many articles and books are available that describe qualitative research methods and provide overviews of content analysis procedures [1], [2], [3], [4], [5], [6], [7], [8], [9], [10]. Some articles include step-by-step directions intended to clarify content analysis methodology. What we have found in our teaching experience is that these directions are indeed very useful. However, qualitative researchers, especially novice researchers, often struggle to understand what is happening in and between the steps, i.e., how the steps are actually taken.

As research supervisors of postgraduate health professionals, we often meet students who present brilliant ideas for qualitative studies with the potential to fill current gaps in the literature. Typically, the suggested studies aim to explore human experience. Research questions exploring human experience are expediently studied through analysing textual data, e.g., data collected in individual interviews, focus groups, documents, or documented participant observation. When reflecting on the proposed study aim together with the student, we often suggest content analysis methodology as the best fit for the study and the student, especially the novice researcher. The interview data are collected and the content analysis adventure begins. Students soon realise that data based on human experiences are complex and multifaceted, and often carry meaning on multiple levels.

For many novice researchers, analysing qualitative data proves unexpectedly challenging and time-consuming. As they soon discover, there is no step-wise analysis process that can be applied to the data like a pattern cutter at a textile factory, and they may become extremely annoyed and frustrated during the hands-on enterprise of qualitative content analysis.

The novice researcher may lament, “I’ve read all the methodology but don’t really know how to start and exactly what to do with my data!” They grapple with qualitative research terms and concepts, for example, the differences between meaning units, codes, categories, and themes, and with the increasing levels of abstraction from raw data to categories or themes. The content analysis adventure may now seem a chaotic undertaking. But life is messy, complex, and utterly fascinating, and experiencing chaos during analysis is normal. Good advice for the qualitative researcher is to be open to the complexity in the data and to utilise one’s flow of creativity.

Inspired primarily by descriptions of “conventional content analysis” in Hsieh and Shannon [3], “inductive content analysis” in Elo and Kyngäs [5], and “qualitative content analysis of an interview text” in Graneheim and Lundman [1], we have written this paper to help the novice qualitative researcher navigate the uncertainty in between the steps of qualitative content analysis. We provide advice, practical tips, and data analysis templates in an attempt to ease frustration and, hopefully, to inspire readers to discover how this exciting methodology contributes to developing a deeper understanding of human experience and our professional contexts.

Overview of qualitative content analysis

Synopsis of content analysis

A common starting point for qualitative content analysis is often transcribed interview texts. The objective in qualitative content analysis is to systematically transform a large amount of text into a highly organised and concise summary of key results. Analysis of the raw data from verbatim transcribed interviews to form categories or themes is a process of further abstraction of data at each step of the analysis; from the manifest and literal content to latent meanings (Fig. 1 and Table 1).

Fig. 1. Example of analysis leading to higher levels of abstraction; from manifest to latent content.

Table 1. Glossary of terms as used in this hands-on guide to doing content analysis.

Condensation: Condensation is a process of shortening the text while still preserving the core meaning.

Code: A code can be thought of as a label; a name that most exactly describes what this particular condensed meaning unit is about. It is usually one or two words long.

Category: A category is formed by grouping together those codes that are related to each other through their content or context. In other words, codes are organised into a category when they describe different aspects, similarities, or differences of the text’s content that belong together. When analysis has led to a plethora of codes, it can be helpful to first assimilate smaller groups of closely related codes into sub-categories; sub-categories related to each other through their content can then be grouped into categories. A category answers questions about who, what, when, or where? In other words, categories are an expression of manifest content, i.e., what is visible and obvious in the data. Category names are factual and short.

Theme: A theme can be seen as expressing an underlying meaning, i.e., latent content, found in two or more categories. Themes express data on an interpretative (latent) level and answer questions such as why, how, in what way, or by what means? A theme is intended to communicate with the reader on both an intellectual and an emotional level, so poetic and metaphoric language is well suited to theme names. Theme names are very descriptive and include verbs, adverbs, and adjectives.

The initial step is to read and re-read the interviews to get a sense of the whole, i.e., to gain a general understanding of what your participants are talking about. At this point you may already start to form ideas about the main points your participants are expressing. You then start dividing the text into smaller parts, namely meaning units, and condense these meaning units further while ensuring that the core meaning is retained. The next step is to label the condensed meaning units by formulating codes and then grouping these codes into categories. Depending on the study’s aim and the quality of the collected data, you may choose categories as the highest level of abstraction for reporting results, or you can go further and create themes [1], [2], [3], [5], [8].
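As a rough illustration, this ladder of abstraction can be sketched as a simple data structure. Python is used here purely for illustration; the interview line is invented, the code and category names echo the exercise later in this paper, and the theme name is hypothetical.

```python
# One invented meaning unit, condensed and labelled with a code.
meaning_unit = {
    "text": "I was lying there and nobody came over to talk to me at all",
    "condensation": "Nobody came to talk to me",  # shortened, core meaning kept
    "code": "Not spoken to",                      # a short descriptive label
}

# Codes that belong together through content or context form a category ...
categories = {
    "Staff actions and non-actions": ["Not spoken to", "Left alone"],
}

# ... and two or more categories can express an underlying (latent) theme.
themes = {
    "Alone and unseen in the rush": ["Staff actions and non-actions", "Unmet needs"],
}

print(meaning_unit["code"])  # -> Not spoken to
```

The point of the sketch is only the shape of the hierarchy: each level is a further abstraction of the one below it, and the analysis repeatedly moves up and down between levels.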

Content analysis as a reflective process

You must mould the clay of the data, tapping into your intuition while maintaining a reflective understanding of how your own previous knowledge is influencing your analysis, i.e., your pre-understanding. In qualitative methodology, it is imperative to vigilantly maintain an awareness of one’s pre-understanding so that it does not unduly influence analysis and/or results. This is the difficult balancing task of keeping a firm grip on one’s assumptions, opinions, and personal beliefs, and not letting them unconsciously steer your analysis process, while simultaneously, and knowingly, utilising one’s pre-understanding to facilitate a deeper understanding of the data.

Content analysis, as in all qualitative analysis, is a reflective process. There is no “step 1, 2, 3, done!” linear progression in the analysis. Identifying and condensing meaning units, coding, and categorising are not one-time events but a continuous process of coding and categorising and then returning to the raw data to reflect on your initial analysis. Are you still satisfied with the length of the meaning units? Do the condensed meaning units and codes still “fit” with each other? Do the codes still fit into this particular category? Typically, a fair amount of adjusting is needed after the first analysis endeavour. For example, a meaning unit might need to be split into two in order to capture an additional core meaning; a code modified to match the core meaning of the condensed meaning unit more closely; or a category name tweaked to describe the included codes most accurately. In other words, analysis is a flexible, reflective process of working and re-working your data that reveals connections and relationships. Once condensed meaning units are coded, it is easier to see the bigger picture, spot patterns in your codes, and organise codes into categories.

Content analysis exercise

The synopsis above is representative of analysis descriptions in many content analysis articles. Although correct, such method descriptions still do not provide much support for the novice researcher during the actual analysis process. Aspiring to provide guidance and direction for the novice, a practical example of doing the actual work of content analysis is provided in the following sections. This practical example is based on a transcribed interview excerpt from a study that aimed to explore patients’ experiences of being admitted into the emergency centre (Fig. 2).

Fig. 2. Excerpt from interview text exploring “Patient’s experience of being admitted into the emergency centre”.

This content analysis exercise provides instructions, tips, and advice to support the content analysis novice in a) familiarising oneself with the data and the hermeneutic spiral, b) dividing up the text into meaning units and subsequently condensing these meaning units, c) formulating codes, and d) developing categories and themes.

Familiarising oneself with the data and the hermeneutic spiral

An important initial phase in the data analysis process is to read and re-read the transcribed interview while keeping your aim in focus. Write down your initial impressions. Embrace your intuition. What is the text talking about? What stands out? How did you react while reading the text? What message did the text leave you with? In this analysis phase, you are gaining a sense of the text as a whole.

You may ask why this is important. During analysis, you will be breaking down the whole text into smaller parts. Returning to your notes with your initial impressions will help you see if your “parts” analysis is matching up with your first impressions of the “whole” text. Are your initial impressions visible in your analysis of the parts? Perhaps you need to go back and check for different perspectives. This is what is referred to as the hermeneutic spiral or hermeneutic circle. It is the process of comparing the parts to the whole to determine whether impressions of the whole verify the analysis of the parts in all phases of analysis. Each part should reflect the whole and the whole should be reflected in each part. This concept will become clearer as you start working with your data.

Dividing up the text into meaning units and condensing meaning units

You have now read the interview a number of times. Keeping your research aim and question clearly in focus, divide up the text into meaning units. The meaning units you locate are then condensed further while keeping the central meaning intact (Table 2). The condensation should be a shortened version of the same text that still conveys the essential message of the meaning unit. Sometimes the meaning unit is already so compact that no further condensation is required. Some content analysis sources warn researchers against short meaning units, claiming that these can lead to fragmentation [1]. However, our personal experience as research supervisors has shown us that a greater problem for the novice is basing analysis on meaning units that are too large and include many meanings, which are then lost in the condensation process.

Table 2. Suggestion for how the exemplar interview text can be divided into meaning units and condensed meaning units (condensations are in parentheses).

Meaning units (Condensations)
– Well, ok, where to start, that was a bad day in my life
– And it started so much the same as any other day. Right up until I was in that car crash!
– I still have nightmares about the sound of the other car and the lady screaming
– I can’t get the sound out of my head!
– it is a crazy place there. Do you know…do you work there?
– Well the people in the ambulance, when they had me in the ambulance they were looking worried, they kept telling me “there was lots of blood here”
– I really remember that. I thought, “Well there is not much I can do”
– Anyway, they seemed to want to get me into the EC in a real hurry. Then pushed my trolley in fast.
– I was feeling very cold. I think my legs were shaking.
– I think they had cut off my jeans. It was very uncomfortable,
– I wasn’t sure if the blanket covered me. I tried to grab the blanket with my hand.
– They must have given me something, maybe in that drip thing
– because I remember thinking that I should be in pain…. my legs must be sore… they were jammed in the car …but I really can’t remember feeling it
– just remember being cold, shaky
– feeling very alone (Feeling very alone)
– just saw everything moving past me
– I really wished my sister was there. She always seems to know what to do. She doesn’t panic,
– But there was no one.
– No one spoke to me.
– I wondered if I was invisible.
– They pushed me into a big room and there were lots of people there. It looked so busy, lots of noise, phones ringing, people talking loudly
– And I remember thinking that my sister wouldn’t know how to find me
– I tried to tell the ambulance guy that I needed him to please call my sister
– … but I had a thing on my face – for air, they said before – so no one heard me
– No one seemed to be looking at my face.
– They pushed me into the middle of the room and then walked away. They just left me
– And I am not sure what everyone was doing
– They seemed to be rushing around
– … but no one spoke to me.
– Suddenly someone grabbed my leg,
– I got such a fright
– they didn’t say anything to me…
– just poked my leg.
– I remember screaming.
– I remember that pain!

Formulating codes

The next step is to develop codes that are descriptive labels for the condensed meaning units ( Table 3 ). Codes concisely describe the condensed meaning unit and are tools to help researchers reflect on the data in new ways. Codes make it easier to identify connections between meaning units. At this stage of analysis you are still keeping very close to your data with very limited interpretation of content. You may adjust, re-do, re-think, and re-code until you get to the point where you are satisfied that your choices are reasonable. Just as in the initial phase of getting to know your data as a whole, it is also good to write notes during coding on your impressions and reactions to the text.

Table 3. Suggestions for coding of condensed meaning units.

Condensed meaning unit → Code
It was a bad day in my life → The crash
Ordinary day until the crash → The crash
Nightmares about the sounds of the crash → The crash
Can’t get the sound out of my head → The crash
Emergency Centre is a crazy place → Emergency Centre is crazy
Ambulance staff looked worried about all the blood → In the ambulance
Ambulance staff were in a great hurry to get the trolley into EC → Staff in a hurry
I feel cold and my legs are shaking → Cold and shaky
Jeans cut off and very uncomfortable → Feeling exposed
Tried to grab the blanket to cover me → Feeling exposed
Must have been given something in a drip → In the ambulance
Thinking I should be in pain but can’t remember feeling legs jammed in the car → In the ambulance
Being cold and shaky → Cold and shaky
Feeling very alone → Feeling alone
Only saw things moving past me → Emergency Centre is busy
I wanted my sister who knows what to do and doesn’t panic → Wanting support
There was no one → Feeling alone
No one spoke to me → Not spoken to
Was I invisible → Feeling invisible
A big, busy, noisy room → Emergency Centre is noisy
Tried to tell ambulance guy I needed him to call my sister → Wanting help
With this thing on my face no one heard me → Not heard
No one looked at my face → Not looked at
Pushed me to the middle of the room, walked away, left me → Left alone
I didn’t know what they were doing → Unsure
They were rushing about → Staff in a hurry
No one spoke to me → Not spoken to
Suddenly someone grabbed my leg → Staff actions
I got a fright → Frightened
Saying nothing to me → Not spoken to
They poked my leg → Staff actions
I screamed → Pain
I remember the pain → Pain
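Once condensed meaning units have been coded, a simple tally can reveal which codes recur, which in turn hints at emerging categories. A minimal Python sketch, using a handful of the condensation and code pairs from the table above:

```python
from collections import Counter

# A few of the condensation → code pairs suggested in Table 3.
coded_units = [
    ("It was a bad day in my life", "The crash"),
    ("Ordinary day until the crash", "The crash"),
    ("No one spoke to me", "Not spoken to"),
    ("Saying nothing to me", "Not spoken to"),
    ("I got a fright", "Frightened"),
]

# Tallying the codes shows which ones recur across the interview.
code_counts = Counter(code for _, code in coded_units)
print(code_counts)
# -> Counter({'The crash': 2, 'Not spoken to': 2, 'Frightened': 1})
```

A tally like this is only a navigational aid; in qualitative content analysis a code that occurs once can matter as much as one that occurs often.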

Developing categories and themes

The next step is to sort codes into categories that answer the questions who, what, when, or where? One does this by comparing codes and appraising them to determine which codes seem to belong together, thereby forming a category. In other words, a category consists of codes that appear to deal with the same issue, i.e., manifest content visible in the data with limited interpretation on the part of the researcher. Category names are most often short and factual sounding.

In data that is rich with latent meaning, analysis can be carried on to create themes. In our practical example, we have continued the process of abstracting data to a higher level, from category to theme level, and developed three themes as well as an overarching theme (Table 4). Themes express underlying meaning, i.e., latent content, and are formed by grouping two or more categories together. Themes answer questions such as why, how, in what way, or by what means? Therefore, theme names include verbs, adverbs, and adjectives and are very descriptive or even poetic.

Table 4. Suggestion for organisation of coded meaning units into categories and themes (each line shows condensation → code, grouped by category).

Overarching theme: THE EMERGENCY CENTRE THROUGH PATIENTS’ EYES – ALONE AND COLD IN CHAOS

Category: Reliving the crash
  It was a bad day in my life → The crash
  Ordinary day until the crash → The crash
  Nightmares about the sounds of the crash → The crash
  Can’t get the sound out of my head → The crash

Category: Reliving the rescue
  Ambulance staff looked worried about all the blood → In the ambulance
  Must have been given something in a drip → In the ambulance
  Thinking I should be in pain but can’t remember feeling legs jammed in the car → In the ambulance

Category: Emergency Centre is a crazy, noisy environment
  EC is a crazy place → Emergency Centre is crazy
  Only saw things moving past me → Emergency Centre is busy
  A big, busy noisy room → Emergency Centre is noisy

Category: Staff actions and non-actions
  Ambulance staff were in a great hurry to get the trolley into EC → Staff in a hurry
  They were rushing about → Staff in a hurry
  Pushed me to the middle of the room, walked away, left me → Left alone
  No one spoke to me → Not spoken to
  No one spoke to me → Not spoken to
  Saying nothing to me → Not spoken to
  Suddenly someone grabbed my leg → Staff actions
  They poked my leg → Staff actions
  No one looked at my face → Not looked at
  With this thing on my face no one heard me → Not heard

Category: Unmet needs
  I wanted my sister who knows what to do and doesn’t panic → Wanting support
  Tried to tell ambulance guy I needed him to call my sister → Wanting help

Category: Physical responses
  I feel cold and my legs are shaking → Cold and shaky
  Being cold and shaky → Cold and shaky
  I remember the pain → Pain
  I screamed → Pain

Category: Emotional responses
  I couldn’t do anything about it → Feeling helpless
  Pants cut off and very uncomfortable → Feeling exposed
  Tried to grab the blanket to cover me → Feeling exposed
  Was I invisible → Feeling invisible
  There was no one → Feeling alone
  Feeling very alone → Feeling alone
  I didn’t know what they were doing → Unsure
  Thinking my sister wouldn’t find me → Feeling lost
  I got a fright → Frightened
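The sorting of codes into categories can also be expressed programmatically. The Python sketch below uses a code-to-category mapping drawn from a few rows of the table above; in real analysis, of course, the mapping emerges from reflective comparison of codes, not from a lookup table.

```python
# A code → category mapping drawn from a few rows of Table 4.
code_to_category = {
    "The crash": "Reliving the crash",
    "In the ambulance": "Reliving the rescue",
    "Not spoken to": "Staff actions and non-actions",
    "Cold and shaky": "Physical responses",
    "Feeling alone": "Emotional responses",
}

def categorise(codes, mapping):
    """Group a list of codes under their categories."""
    grouped = {}
    for code in codes:
        grouped.setdefault(mapping[code], []).append(code)
    return grouped

result = categorise(["The crash", "Not spoken to", "The crash"], code_to_category)
print(result)
# -> {'Reliving the crash': ['The crash', 'The crash'],
#     'Staff actions and non-actions': ['Not spoken to']}
```

If many codes fail the lookup, that mirrors the categorisation problems discussed below: the jump from code to category may be too big, suggesting intermediate sub-categories.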

Some reflections and helpful tips

Understand your pre-understandings

While conducting qualitative research, it is paramount that the researcher remains vigilant against bias during analysis. In other words, did you remain aware of your pre-understandings, i.e., your own personal assumptions, professional background, and previous experiences and knowledge? For example, did you zero in on particular aspects of the interview on account of your profession (as an emergency doctor, emergency nurse, pre-hospital professional, etc.)? Did you assume the patient’s gender? Did your assumptions affect your analysis? How about aspects of culpability; did you assume that this patient was at fault or that this patient was a victim in the crash? Did this affect how you analysed the text?

Staying aware of one’s pre-understandings is exactly as difficult as it sounds. But it is possible, and it is requisite. Focus on putting yourself and your pre-understandings in a holding pattern while you approach your data with an openness and expectation of finding new perspectives. That is the key: expect the new and be prepared to be surprised. If something in your data feels unusual, is different from what you know, atypical, or even odd, don’t by-pass it as “wrong”. Your reactions and intuitive responses are letting you know that here is something to pay extra attention to, besides the more comfortable condensing and coding of more easily recognisable meaning units.

Use your intuition

Intuition is a great asset in qualitative analysis and is not to be dismissed as “unscientific”. Intuition results from tacit knowledge. Just as tacit knowledge is a hallmark of great clinicians [11], [12], it is also an invaluable tool in analysis work [13]. Take note of your gut reactions and intuitive guidance, and remember to write these down! These notes often form a framework of possible avenues for further analysis and are especially helpful as you lift the analysis to higher levels of abstraction: from meaning units to condensed meaning units, to codes, to categories, and then to themes, the highest level of abstraction in content analysis.

Aspects of coding and categorising hard-to-place data

All too often, the novice gets overwhelmed by interview material that deals with the general subject matter of the interview, but doesn’t seem to answer the research question. Don’t be too quick to consider such text as off topic or dross [6] . There is often data that, although not seeming to match the study aim precisely, is still important for illuminating the problem area. This can be seen in our practical example about exploring patients’ experiences of being admitted into the emergency centre. Initially the participant is describing the accident itself. While not directly answering the research question, the description is important for understanding the context of the experience of being admitted into the emergency centre. It is very common that participants will “begin at the beginning” and prologue their narratives in order to create a context that sets the scene. This type of contextual data is vital for gaining a deepened understanding of participants’ experiences.

In our practical example, the participant begins by describing the crash and the rescue, i.e., experiences leading up to and prior to admission to the emergency centre. That is why we have chosen in our analysis to code the condensed meaning unit “Ambulance staff looked worried about all the blood” as “In the ambulance” and place it in the category “Reliving the rescue”. We did not choose to include this meaning unit in the categories specifically about admission to the emergency centre itself. Do you agree with our coding choice? Would you have chosen differently?

Another common problem for the novice is deciding how to code condensed meaning units when the unit can be labelled in several different ways. At this point researchers usually groan and wish they had thought to ask one of those classic follow-up questions like “Can you tell me a little bit more about that?” We have examples of two such coding conundrums in the exemplar, as can be seen in Table 3 (codes we conferred on) and Table 4 (codes we reached consensus on). Do you agree with our choices or would you have chosen different codes? Our best advice is to go back to your impressions of the whole and lean into your intuition when choosing codes that are most reasonable and best fit your data.

A typical problem area during categorisation, especially for the novice researcher, is overlap between content in more than one initial category, i.e., codes included in one category also seem to be a fit for another category. Overlap between initial categories is very likely an indication that the jump from code to category was too big, a problem not uncommon when the data is voluminous and/or very complex. In such cases, it can be helpful to first sort codes into narrower categories, so-called subcategories. Subcategories can then be reviewed for possibilities of further aggregation into categories. In the case of a problematic coding, it is advantageous to return to the meaning unit and check if the meaning unit itself fits the category or if you need to reconsider your preliminary coding.

It is not uncommon to be faced by thorny problems such as these during coding and categorisation. Here we would like to reiterate how valuable it is to have fellow researchers with whom you can discuss and reflect in order to reach consensus on the best way forward in your data analysis. It is really advantageous to compare your analysis with the meaning units, condensations, coding, and categorisations done by another researcher on the same text. Have you identified the same meaning units? Do you agree on coding? See similar patterns in the data? Concur on categories? Sometimes referred to as “researcher triangulation”, this is a key element in qualitative analysis and an important component when striving to ensure trustworthiness in your study [14]. Qualitative research is about seeking out variations, not controlling variables as in quantitative research. Collaborating with others during analysis lets you tap into multiple perspectives and often makes it easier to see variations in the data, thereby enhancing the quality of your results as well as contributing to the rigor of your study. It is important to note that it is not necessary to force consensus in the findings; embracing variations in interpretation can help capture the richness in the data.

Yet there are times when neither openness, pre-understanding, intuition, nor researcher triangulation does the job; for example, when analysing an interview and one is simply confused on how to code certain meaning units. At such times, there are a variety of options. A good starting place is to re-read all the interviews through the lens of this specific issue and actively search for other similar types of meaning units you might have missed. Another way to handle this is to conduct further interviews with specific queries that hopefully shed light on the issue. A third option is to have a follow-up interview with the same person and ask them to explain.

Additional tips

It is important to remember that in a typical project there are several interviews to analyse. Codes found in a single interview serve as a starting point as you then work through the remaining interviews coding all material. Form your categories and themes when all project interviews have been coded.

When submitting an article with your study results, it is a good idea to create a table or figure providing a few key examples of how you progressed from the raw data of meaning units, to condensed meaning units, coding, categorisation, and, if included, themes. Providing such a table or figure supports the rigor of your study [1] and is an element greatly appreciated by reviewers and research consumers.

During the analysis process, it can be advantageous to write down your research aim and questions on a sheet of paper that you keep nearby as you work. Frequently referring to your aim can help you keep focused and on track during analysis. Many find it helpful to colour code their transcriptions and write notes in the margins.

Having access to qualitative analysis software can be greatly helpful in organising and retrieving analysed data. Just remember, a computer does not analyse the data. As Jennings [15] has stated, “… it is ‘peopleware,’ not software, that analyses.” A major drawback is that qualitative analysis software can be prohibitively expensive. One way forward is to use table templates such as we have used in this article. (Three analysis templates, Templates A, B, and C, are provided as supplementary online material). Additionally, the “find” function in word processing programmes such as Microsoft Word (Redmond, WA USA) facilitates locating key words, e.g., in transcribed interviews, meaning units, and codes.
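A key-word search of this kind is also easy to script. The sketch below is a hypothetical illustration (the transcript lines and search term are invented) that reproduces a word processor's "find" function across a transcript:

```python
def find_keyword(lines, keyword):
    """Return (line_number, line) pairs that contain keyword, case-insensitively."""
    return [(n, line.strip())
            for n, line in enumerate(lines, start=1)
            if keyword.lower() in line.lower()]

# Hypothetical transcript excerpts, for illustration only.
transcript = [
    "I remember the ambulance staff looked worried about all the blood.",
    "At the emergency centre everything happened very fast.",
]
print(find_keyword(transcript, "Ambulance"))
```

Running the search over every transcript file in a project folder is a natural extension, and keeps the retrieval step systematic even without dedicated software.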

Lessons learnt/key points

From our experience with content analysis we have learnt a number of important lessons that may be useful for the novice researcher. They are:

  • A method description is a guideline supporting analysis and trustworthiness. Don’t get caught up in following the steps too rigidly; reflexivity and flexibility are just as important. Remember that a method description is a tool helping you make sense of your data by reducing a large amount of text to distil key results.
  • It is important to maintain a vigilant awareness of your own pre-understandings in order to avoid bias during analysis and in the results.
  • Use and trust your own intuition during the analysis process.
  • If possible, discuss and reflect together with other researchers who have analysed the same data. Be open and receptive to new perspectives.
  • Understand that it is going to take time. Even if you are quite experienced, each set of data is different and all require time to analyse. Don’t expect to have all the data analysis done over a weekend. It may take weeks. You need time to think, reflect, and then review your analysis.
  • Keep reminding yourself how excited you have felt about this area of research and how interesting it is. Embrace it with enthusiasm!
  • Let it be chaotic – have faith that some sense will start to surface. Don’t be afraid and think you will never get to the end – you will… eventually!

Peer review under responsibility of African Federation for Emergency Medicine.

Appendix A Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.afjem.2017.08.001 .



Content Analysis

Content analysis is a research tool used to determine the presence of certain words, themes, or concepts within some given qualitative data (i.e., text). Using content analysis, researchers can quantify and analyze the presence, meanings, and relationships of such words, themes, or concepts. As an example, researchers can evaluate the language used within a news article to search for bias or partiality. Researchers can then make inferences about the messages within the texts, the writer(s), the audience, and even the culture and time surrounding the text.

Description

Sources of data could be interviews, open-ended questions, field research notes, conversations, or literally any occurrence of communicative language (such as books, essays, discussions, newspaper headlines, speeches, media, and historical documents). A single study may analyze various forms of text in its analysis. To analyze text using content analysis, the text must first be coded, or broken down, into manageable units (i.e., “codes”). The codes can then be further grouped into “code categories” to summarize the data even further.

Three different definitions of content analysis are provided below.

Definition 1: “Any technique for making inferences by systematically and objectively identifying special characteristics of messages.” (from Holsti, 1968)

Definition 2: “An interpretive and naturalistic approach. It is both observational and narrative in nature and relies less on the experimental elements normally associated with scientific research (reliability, validity, and generalizability).” (from Ethnography, Observational Research, and Narrative Inquiry, 1994-2012)

Definition 3: “A research technique for the objective, systematic and quantitative description of the manifest content of communication.” (from Berelson, 1952)

Uses of Content Analysis

Identify the intentions, focus or communication trends of an individual, group or institution

Describe attitudinal and behavioral responses to communications

Determine the psychological or emotional state of persons or groups

Reveal international differences in communication content

Reveal patterns in communication content

Pre-test and improve an intervention or survey prior to launch

Analyze focus group interviews and open-ended questions to complement quantitative data

Types of Content Analysis

There are two general types of content analysis: conceptual analysis and relational analysis. Conceptual analysis determines the existence and frequency of concepts in a text. Relational analysis develops the conceptual analysis further by examining the relationships among concepts in a text. Each type of analysis may lead to different results, conclusions, interpretations and meanings.

Conceptual Analysis

Typically people think of conceptual analysis when they think of content analysis. In conceptual analysis, a concept is chosen for examination and the analysis involves quantifying and counting its presence. The main goal is to examine the occurrence of selected terms in the data. Terms may be explicit or implicit. Explicit terms are easy to identify. Coding of implicit terms is more complicated: you need to decide the level of implication allowed, and judgments rest on subjective interpretation (an issue for reliability and validity). Therefore, coding of implicit terms involves using a dictionary, contextual translation rules, or both.

To begin a conceptual content analysis, first identify the research question and choose a sample or samples for analysis. Next, the text must be coded into manageable content categories. This is basically a process of selective reduction. By reducing the text to categories, the researcher can focus on and code for specific words or patterns that inform the research question.

General steps for conducting a conceptual content analysis:

1. Decide the level of analysis: word, word sense, phrase, sentence, themes

2. Decide how many concepts to code for: develop a pre-defined or interactive set of categories or concepts. Decide either: A. to allow flexibility to add categories through the coding process, or B. to stick with the pre-defined set of categories.

Option A allows for the introduction and analysis of new and important material that could have significant implications to one’s research question.

Option B allows the researcher to stay focused and examine the data for specific concepts.

3. Decide whether to code for existence or frequency of a concept. The decision changes the coding process.

When coding for the existence of a concept, the researcher counts a concept only once, no matter how many times it appears in the data.

When coding for the frequency of a concept, the researcher would count the number of times a concept appears in a text.
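The distinction between existence and frequency coding can be made concrete in code. The following sketch is illustrative (the sample text and concept are invented); it codes a single concept either way:

```python
import re

def code_concept(text, concept, mode="frequency"):
    """Code one concept in a text.
    mode="existence": 1 if the concept appears at least once, else 0.
    mode="frequency": the total number of occurrences."""
    matches = re.findall(r"\b" + re.escape(concept.lower()) + r"\b", text.lower())
    if mode == "existence":
        return 1 if matches else 0
    return len(matches)

sample = "The staff seemed worried. Worried faces everywhere, and I was worried too."
print(code_concept(sample, "worried", mode="existence"))  # 1
print(code_concept(sample, "worried", mode="frequency"))  # 3
```

The word-boundary pattern (`\b`) keeps "worried" from matching inside a longer word, one small example of the coding rules discussed in step 4 below.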

4. Decide on how you will distinguish among concepts:

Should words be coded exactly as they appear, or coded as the same when they appear in different forms? For example, “dangerous” vs. “dangerousness”. The point here is to create coding rules so that these word segments are transparently categorized in a logical fashion. The rules could make all of these word segments fall into the same category, or they could be formulated so that the researcher can distinguish these word segments into separate codes.

What level of implication is to be allowed? Words that imply the concept or words that explicitly state the concept? For example, “dangerous” vs. “the person is scary” vs. “that person could cause harm to me”. These word segments may not merit separate categories, due to the implicit meaning of “dangerous”.

5. Develop rules for coding your texts. After the decisions in steps 1-4 are complete, a researcher can begin developing rules for the translation of text into codes. This will keep the coding process organized and consistent. The researcher can code for exactly what he/she wants to code. Validity of the coding process is ensured when the researcher is consistent and coherent in their codes, meaning that they follow their translation rules. In content analysis, abiding by the translation rules is equivalent to validity.
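Translation rules can be written down explicitly. In this hypothetical sketch (the codes and word lists are invented for illustration), a small rule table maps different surface forms to a single code:

```python
# Hypothetical translation rules: surface forms -> code.
CODING_RULES = {
    "danger": {"danger", "dangerous", "dangerousness"},
    "fear": {"scary", "afraid", "frightened"},
}

def apply_rules(token):
    """Translate one token into a code, or None if no rule matches."""
    token = token.lower().strip(".,!?\"'")
    for code, forms in CODING_RULES.items():
        if token in forms:
            return code
    return None

tokens = "That dangerous road was scary".split()
print([apply_rules(t) for t in tokens])  # [None, 'danger', None, None, 'fear']
```

Writing the rules as data rather than prose makes the coding scheme easy to audit and to hand to a second coder, which supports the consistency this step calls for.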

6. Decide what to do with irrelevant information: should this be ignored (e.g. common English words like “the” and “and”), or used to reexamine the coding scheme in the case that it would add to the outcome of coding?

7. Code the text: This can be done by hand or by using software. By using software, researchers can input categories and have coding done automatically, quickly and efficiently, by the software program. When coding is done by hand, a researcher can recognize errors far more easily (e.g. typos, misspelling). If using computer coding, text could be cleaned of errors to include all available data. This decision of hand vs. computer coding is most relevant for implicit information where category preparation is essential for accurate coding.

8. Analyze your results: Draw conclusions and generalizations where possible. Determine what to do with irrelevant, unwanted, or unused text: reexamine, ignore, or reassess the coding scheme. Interpret results carefully as conceptual content analysis can only quantify the information. Typically, general trends and patterns can be identified.

Relational Analysis

Relational analysis begins like conceptual analysis, where a concept is chosen for examination. However, the analysis involves exploring the relationships between concepts. Individual concepts are viewed as having no inherent meaning and rather the meaning is a product of the relationships among concepts.

To begin a relational content analysis, first identify a research question and choose a sample or samples for analysis. The research question must be focused so the concept types are not open to interpretation and can be summarized. Next, select the text for analysis carefully: balance having enough information for a thorough analysis, so results are not limited, against having so much information that the coding process becomes too arduous to supply meaningful and worthwhile results.

There are three subcategories of relational analysis to choose from prior to going on to the general steps.

Affect extraction: an emotional evaluation of concepts explicit in a text. A challenge to this method is that emotions can vary across time, populations, and space. However, it could be effective at capturing the emotional and psychological state of the speaker or writer of the text.

Proximity analysis: an evaluation of the co-occurrence of explicit concepts in the text. Text is defined as a string of words called a “window” that is scanned for the co-occurrence of concepts. The result is the creation of a “concept matrix”, or a group of interrelated co-occurring concepts that would suggest an overall meaning.
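A sliding-window co-occurrence count of this kind can be sketched in a few lines. The window size, tokens, and concept list below are arbitrary choices for illustration, not part of any standard procedure:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(tokens, concepts, window=4):
    """Count pairs of concepts appearing together within a sliding window."""
    pairs = Counter()
    for i in range(len(tokens)):
        present = {t for t in tokens[i:i + window] if t in concepts}
        for a, b in combinations(sorted(present), 2):
            pairs[(a, b)] += 1
    return pairs

tokens = "the ambulance staff looked worried about the blood".split()
matrix = cooccurrence(tokens, {"ambulance", "worried", "blood"})
print(dict(matrix))
```

The resulting pair counts are a simple form of the "concept matrix". Note that overlapping windows can count the same pair more than once; real implementations often deduplicate per sentence or weight pairs by distance.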

Cognitive mapping: a visualization technique for either affect extraction or proximity analysis. Cognitive mapping attempts to create a model of the overall meaning of the text such as a graphic map that represents the relationships between concepts.

General steps for conducting a relational content analysis:

1. Determine the type of analysis: once the sample has been selected, the researcher needs to determine what types of relationships to examine and the level of analysis: word, word sense, phrase, sentence, themes.

2. Reduce the text to categories and code for words or patterns. A researcher can code for the existence of meanings or words.

3. Explore the relationship between concepts: once the words are coded, the text can be analyzed for the following:

Strength of relationship: degree to which two or more concepts are related.

Sign of relationship: are concepts positively or negatively related to each other?

Direction of relationship: the types of relationship that categories exhibit. For example, “X implies Y” or “X occurs before Y” or “if X then Y” or if X is the primary motivator of Y.

4. Code the relationships: a difference between conceptual and relational analysis is that the statements or relationships between concepts are coded.

5. Perform statistical analyses: explore differences or look for relationships among the variables identified during coding.

6. Map out representations: such as decision mapping and mental models.

Reliability and Validity

Reliability : Because of the human nature of researchers, coding errors can never be eliminated but only minimized. Generally, 80% is an acceptable margin for reliability. Three criteria comprise the reliability of a content analysis:

Stability: the tendency for coders to consistently re-code the same data in the same way over a period of time.

Reproducibility: the tendency for a group of coders to classify category membership in the same way.

Accuracy: extent to which the classification of text corresponds to a standard or norm statistically.
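The 80% figure above refers to intercoder agreement. As a minimal sketch (the codes are invented, and chance-corrected measures such as Cohen's kappa are usually preferred in practice), simple percent agreement between two coders looks like this:

```python
def percent_agreement(coder_a, coder_b):
    """Share of units that two coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must code the same units")
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

a = ["danger", "fear", "danger", "calm", "fear"]
b = ["danger", "fear", "calm", "calm", "fear"]
print(percent_agreement(a, b))  # 0.8 -- right at the commonly cited threshold
```

Computing agreement early, on a small shared subset of the data, lets coders refine ambiguous categories before coding the full corpus.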

Validity : Three criteria comprise the validity of a content analysis:

Closeness of categories: this can be achieved by utilizing multiple classifiers to arrive at an agreed upon definition of each specific category. Using multiple classifiers, a concept category that may be an explicit variable can be broadened to include synonyms or implicit variables.

Conclusions: What level of implication is allowable? Do conclusions correctly follow the data? Are results explainable by other phenomena? This becomes especially problematic when using computer software for analysis and distinguishing between synonyms. For example, the word “mine,” variously denotes a personal pronoun, an explosive device, and a deep hole in the ground from which ore is extracted. Software can obtain an accurate count of that word’s occurrence and frequency, but not be able to produce an accurate accounting of the meaning inherent in each particular usage. This problem could throw off one’s results and make any conclusion invalid.

Generalizability of the results to a theory: dependent on the clear definitions of concept categories, how they are determined and how reliable they are at measuring the idea one is seeking to measure. Generalizability parallels reliability as much of it depends on the three criteria for reliability.

Advantages of Content Analysis

Directly examines communication using text

Allows for both qualitative and quantitative analysis

Provides valuable historical and cultural insights over time

Allows a closeness to data

Coded form of the text can be statistically analyzed

Unobtrusive means of analyzing interactions

Provides insight into complex models of human thought and language use

When done well, is considered a relatively “exact” research method

Content analysis is a readily understood and inexpensive research method

A more powerful tool when combined with other research methods such as interviews, observation, and use of archival records. It is very useful for analyzing historical material, especially for documenting trends over time.

Disadvantages of Content Analysis

Can be extremely time consuming

Is subject to increased error, particularly when relational analysis is used to attain a higher level of interpretation

Is often devoid of theoretical base, or attempts too liberally to draw meaningful inferences about the relationships and impacts implied in a study

Is inherently reductive, particularly when dealing with complex texts

Tends too often to simply consist of word counts

Often disregards the context that produced the text, as well as the state of things after the text is produced

Can be difficult to automate or computerize

Textbooks & Chapters  

Berelson, Bernard. Content Analysis in Communication Research. New York: Free Press, 1952.

Busha, Charles H. and Stephen P. Harter. Research Methods in Librarianship: Techniques and Interpretation. New York: Academic Press, 1980.

de Sola Pool, Ithiel. Trends in Content Analysis. Urbana: University of Illinois Press, 1959.

Krippendorff, Klaus. Content Analysis: An Introduction to its Methodology. Beverly Hills: Sage Publications, 1980.

Fielding, NG & Lee, RM. Using Computers in Qualitative Research. SAGE Publications, 1991. (Refer to Chapter by Seidel, J. ‘Method and Madness in the Application of Computer Technology to Qualitative Data Analysis’.)

Methodological Articles  

Hsieh HF & Shannon SE. (2005). Three Approaches to Qualitative Content Analysis. Qualitative Health Research. 15(9): 1277-1288.

Elo S, Kaarianinen M, Kanste O, Polkki R, Utriainen K, & Kyngas H. (2014). Qualitative Content Analysis: A focus on trustworthiness. Sage Open. 4:1-10.

Application Articles  

Abroms LC, Padmanabhan N, Thaweethai L, & Phillips T. (2011). iPhone Apps for Smoking Cessation: A content analysis. American Journal of Preventive Medicine. 40(3):279-285.

Ullstrom S. Sachs MA, Hansson J, Ovretveit J, & Brommels M. (2014). Suffering in Silence: a qualitative study of second victims of adverse events. British Medical Journal, Quality & Safety Issue. 23:325-331.

Owen P. (2012). Portrayals of Schizophrenia by Entertainment Media: A Content Analysis of Contemporary Movies. Psychiatric Services. 63:655-659.

Choosing whether to conduct a content analysis by hand or by using computer software can be difficult. Refer to ‘Method and Madness in the Application of Computer Technology to Qualitative Data Analysis’ listed above in “Textbooks and Chapters” for a discussion of the issue.

QSR NVivo:  http://www.qsrinternational.com/products.aspx

Atlas.ti:  http://www.atlasti.com/webinars.html

R- RQDA package:  http://rqda.r-forge.r-project.org/

Rolly Constable, Marla Cowell, Sarita Zornek Crawford, David Golden, Jake Hartvigsen, Kathryn Morgan, Anne Mudgett, Kris Parrish, Laura Thomas, Erika Yolanda Thompson, Rosie Turner, and Mike Palmquist. (1994-2012). Ethnography, Observational Research, and Narrative Inquiry. Writing@CSU. Colorado State University. Available at: https://writing.colostate.edu/guides/guide.cfm?guideid=63 .

As an introduction to Content Analysis by Michael Palmquist, this is the main resource on Content Analysis on the Web. It is comprehensive, yet succinct. It includes examples and an annotated bibliography. The information contained in the narrative above draws heavily from and summarizes Michael Palmquist’s excellent resource on Content Analysis but was streamlined for the purpose of doctoral students and junior researchers in epidemiology.

At Columbia University Mailman School of Public Health, more detailed training is available through the Department of Sociomedical Sciences- P8785 Qualitative Research Methods.


How to do a content analysis


What is content analysis?

This article covers:

  • Why would you use a content analysis?
  • Types of content analysis: conceptual content analysis and relational content analysis
  • Reliability and validity
  • The advantages and disadvantages of content analysis
  • A step-by-step guide to conducting a content analysis: develop your research questions; choose the content you’ll analyze; identify your biases; define the units and categories of coding; develop a coding scheme; code the content; analyze the results
  • Frequently asked questions about content analysis

In research, content analysis is the process of analyzing content and its features with the aim of identifying patterns and the presence of words, themes, and concepts within the content. Simply put, content analysis is a research method that aims to present the trends, patterns, concepts, and ideas in content as objective, quantitative or qualitative data , depending on the specific use case.

As such, some of the objectives of content analysis include:

  • Simplifying complex, unstructured content.
  • Identifying trends, patterns, and relationships in the content.
  • Determining the characteristics of the content.
  • Identifying the intentions of individuals through the analysis of the content.
  • Identifying the implied aspects in the content.

Typically, when doing a content analysis, you’ll gather data not only from written text sources like newspapers, books, journals, and magazines but also from a variety of other oral and visual sources of content like:

  • Voice recordings, speeches, and interviews.
  • Web content, blogs, and social media content.
  • Films, videos, and photographs.

One of content analysis’s distinguishing features is that you'll be able to gather data for research without physically interacting with participants. In other words, when doing a content analysis, you don't need to collect data from people directly.

The process of doing a content analysis usually involves categorizing or coding concepts, words, and themes within the content and analyzing the results. We’ll look at the process in more detail below.

Typically, you’ll use content analysis when you want to:

  • Identify the intentions, communication trends, or communication patterns of an individual, a group of people, or even an institution.
  • Analyze and describe the behavioral and attitudinal responses of individuals to communications.
  • Determine the emotional or psychological state of an individual or a group of people.
  • Analyze the international differences in communication content.
  • Analyze audience responses to content.

Keep in mind, though, that these are just some examples of use cases where a content analysis might be appropriate and there are many others.

The key thing to remember is that content analysis will help you quantify the occurrence of specific words, phrases, themes, and concepts in content. Moreover, it can also be used when you want to make qualitative inferences out of the data by analyzing the semantic meanings and interrelationships between words, themes, and concepts.

In general, there are two types of content analysis: conceptual and relational analysis . Although these two types follow largely similar processes, their outcomes differ. As such, each of these types can provide different results, interpretations, and conclusions. With that in mind, let’s now look at these two types of content analysis in more detail.

With conceptual analysis, you’ll determine the existence of certain concepts within the content and identify their frequency. In other words, conceptual analysis involves counting the number of times a specific concept appears in the content.

Conceptual analysis is typically focused on explicit data, which means you’ll focus your analysis on a specific concept to identify its presence in the content and determine its frequency.

However, when conducting a content analysis, you can also use implicit data. This approach is more involved, complicated, and requires the use of a dictionary, contextual translation rules, or a combination of both.

No matter what type you use, conceptual analysis brings an element of quantitative analysis into a qualitative approach to research.

Relational content analysis takes conceptual analysis a step further. So, while the process starts in the same way by identifying concepts in content, it doesn’t focus on finding the frequency of these concepts, but rather on the relationships between the concepts, the context in which they appear in the content, and their interrelationships.

Before starting with a relational analysis, you’ll first need to decide on which subcategory of relational analysis you’ll use:

  • Affect extraction: With this relational content analysis approach, you’ll evaluate concepts based on their emotional attributes. You’ll typically assess these emotions on a rating scale with higher values assigned to positive emotions and lower values to negative ones. In turn, this allows you to capture the emotions of the writer or speaker at the time the content is created. The main difficulty with this approach is that emotions can differ over time and across populations.
  • Proximity analysis: With this approach, you’ll identify concepts as in conceptual analysis, but you’ll evaluate the way in which they occur together in the content. In other words, proximity analysis allows you to analyze the relationship between concepts and derive a concept matrix from which you’ll be able to develop meaning. Proximity analysis is typically used when you want to extract facts from the content rather than contextual, emotional, or cultural factors.
  • Cognitive mapping: Finally, cognitive mapping can be used with affect extraction or proximity analysis. It’s a visualization technique that allows you to create a model that represents the overall meaning of content and presents it as a graphic map of the relationships between concepts. As such, it’s also commonly used when analyzing the changes in meanings, definitions, and terms over time.
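The cognitive-mapping idea above can be illustrated in miniature: co-occurring concept pairs become edges of a graph, the text form of a graphic concept map. The concept pairs here are invented for illustration:

```python
def concept_map(edges):
    """Build an undirected adjacency list from concept pairs."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    return {node: sorted(neighbours) for node, neighbours in graph.items()}

edges = [("ambulance", "worried"), ("worried", "blood")]
print(concept_map(edges))
# {'ambulance': ['worried'], 'worried': ['ambulance', 'blood'], 'blood': ['worried']}
```

An adjacency list like this can be fed directly into graph or visualization tools to draw the map itself.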

Now that we’ve seen what content analysis is and looked at the different types of content analysis, it’s important to understand how reliable it is as a research method . We’ll also look at what criteria impact the validity of a content analysis.

There are three criteria that determine the reliability of a content analysis:

  • Stability . Stability refers to the tendency of coders to consistently categorize or code the same data in the same way over time.
  • Reproducibility . This criterion refers to the tendency of coders to classify category membership in the same way.
  • Accuracy . Accuracy refers to the extent to which the classification of content corresponds to a specific standard.

Keep in mind, though, that because you’ll need to code or categorize the concepts you’ll aim to identify and analyze manually, you’ll never be able to eliminate human error. However, you’ll be able to minimize it.

In turn, three criteria determine the validity of a content analysis:

  • Closeness of categories . This is achieved by using multiple classifiers to get an agreed-upon definition for a specific category by using either implicit variables or synonyms. In this way, the category can be broadened to include more relevant data.
  • Conclusions . Here, it’s crucial to decide what level of implication will be allowable. In other words, it’s important to consider whether the conclusions are valid based on the data or whether they can be explained using some other phenomena.
  • Generalizability of the results of the analysis to a theory . Generalizability comes down to how you determine your categories, as mentioned above, and how reliable those categories are. In turn, this relies on how accurate the categories are at measuring the concepts or ideas that you’re looking to measure.

Considering everything mentioned above, there are definite advantages and disadvantages when it comes to content analysis:

Advantages

  • It doesn’t require physical interaction with any participant, or, in other words, it’s unobtrusive. This means that the presence of a researcher is unlikely to influence the results. As a result, there are also fewer ethical concerns compared to some other analysis methods.
  • It uses a systematic and transparent approach to gathering data. When done correctly, content analysis is easily repeatable by other researchers, which, in turn, leads to more reliable results.
  • Because researchers are able to conduct content analysis in any location, at any time, and at a lower cost compared to many other analysis methods, it’s typically more flexible.
  • It allows researchers to effectively combine quantitative and qualitative analysis into one approach, which then results in a more rigorous scientific analysis of the data.

Disadvantages

  • It always involves an element of subjective interpretation. In many cases, it’s criticized for being too subjective and not scientifically rigorous enough. Fortunately, when applying the criteria of reliability and validity, researchers can produce accurate results with content analysis.
  • It’s inherently reductive. In other words, by focusing only on specific concepts, words, or themes, researchers will often disregard any context, nuances, or deeper meaning in the content.
  • Although it offers researchers an inexpensive and flexible approach to gathering and analyzing data, coding or categorizing a large number of concepts is time-consuming.
  • Coding can be challenging to automate, which means the process largely relies on manual work.

Let’s now look at the steps you’ll need to follow when doing a content analysis.

The first step will always be to formulate your research questions. This is simply because, without clear and defined research questions, you won’t know what question to answer and, by implication, won’t be able to code your concepts.

Based on your research questions, you’ll then need to decide what content you’ll analyze. Here, you’ll use three factors to find the right content:

  • The type of content. Here you’ll need to consider the various types of content you’ll use and their medium: for example, blog posts, social media, newspapers, or online articles.
  • What criteria you’ll use for inclusion. Here you’ll decide what criteria a piece of content must meet to be included. This can, for instance, be the mention of a certain event or the advertising of a specific product.
  • Your parameters. Here you’ll decide what content to include based on specified parameters, such as date and location.
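These three selection factors amount to a filter over your candidate content. As a sketch only, assuming a hypothetical corpus in which each item records its type, text, date, and location:

```python
from datetime import date

# Hypothetical corpus records: each item is one piece of candidate content
corpus = [
    {"type": "blog post", "text": "Our new product launch...", "date": date(2021, 3, 1), "location": "UK"},
    {"type": "newspaper", "text": "Election results announced", "date": date(2019, 6, 5), "location": "UK"},
    {"type": "blog post", "text": "Product review roundup", "date": date(2022, 1, 15), "location": "US"},
]

def include(item, types, keyword, start, end, locations):
    """Apply the type, inclusion-criterion, and parameter checks in one pass."""
    return (item["type"] in types                   # the type of content
            and keyword in item["text"].lower()     # criterion for inclusion
            and start <= item["date"] <= end        # date parameter
            and item["location"] in locations)      # location parameter

selected = [d for d in corpus
            if include(d, {"blog post"}, "product",
                       date(2020, 1, 1), date(2022, 12, 31), {"UK", "US"})]
print(len(selected))  # 2
```

Writing the criteria down this explicitly (even if you never automate them) is useful in itself: it forces the inclusion rules to be unambiguous, which helps other researchers replicate your selection.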

The next step is to consider your own preconceptions about the questions and identify your biases. This process is referred to as bracketing; it makes you aware of your biases before you start your research, so that they’re less likely to influence the analysis.

Your next step would be to define the units of meaning that you’ll code. This could be, for example, the number of times a concept appears in the content, or the treatment of concepts, words, or themes in the content. You’ll then need to define the set of categories you’ll use for coding, which can be either objective or more conceptual.

Based on the above, you’ll then organize the units of meaning into your defined categories. Apart from this, your coding scheme will also determine how you’ll analyze the data.

The next step is to code the content. During this process, you’ll work through the content and record the data according to your coding scheme. It’s also here where conceptual and relational analysis starts to deviate in relation to the process you’ll need to follow.

As mentioned earlier, conceptual analysis aims to identify the number of times a specific concept, idea, word, or phrase appears in the content. So, here, you’ll need to decide what level of analysis you’ll implement.

In contrast, with relational analysis, you’ll need to decide what type of relational analysis you’ll use. So, you’ll need to determine whether you’ll use affect extraction, proximity analysis, cognitive mapping, or a combination of these approaches.

Once you’ve coded the data, you’ll be able to analyze it and draw conclusions from the data based on your research questions.

Content analysis offers an inexpensive and flexible way to identify trends and patterns in communication content. In addition, it’s unobtrusive which eliminates many ethical concerns and inaccuracies in research data. However, to be most effective, a content analysis must be planned and used carefully in order to ensure reliability and validity.

The two general types of content analysis are conceptual and relational analysis. Although these two types follow largely similar processes, their outcomes differ. As such, each can provide different results, interpretations, and conclusions.

In qualitative research, coding means categorizing concepts, words, and themes within your content to create a basis for analyzing the results. While coding, you work through the content and record the data according to your coding scheme.

Content analysis is the process of analyzing content and its features with the aim of identifying patterns and the presence of words, themes, and concepts within the content. The goal of a content analysis is to present the trends, patterns, concepts, and ideas in content as objective, quantitative or qualitative data, depending on the specific use case.

Content analysis is primarily a qualitative method of data analysis, although it can incorporate quantitative elements, and it can be used in many different fields. It is particularly popular in the social sciences.

It is possible to do qualitative analysis without coding, but content analysis as a method of qualitative analysis requires coding or categorizing data to then analyze it according to your coding scheme in the next step.



Chapter 17. Content Analysis

Introduction.

Content analysis is a term that is used to mean both a method of data collection and a method of data analysis. Archival and historical works can be the source of content analysis, but so too can the contemporary media coverage of a story, blogs, comment posts, films, cartoons, advertisements, brand packaging, and photographs posted on Instagram or Facebook. Really, almost anything can be the “content” to be analyzed. This is a qualitative research method because the focus is on the meanings and interpretations of that content rather than strictly numerical counts or variables-based causal modeling. [1] Qualitative content analysis (sometimes referred to as QCA) is particularly useful when attempting to define and understand prevalent stories or communication about a topic of interest—in other words, when we are less interested in what particular people (our defined sample) are doing or believing and more interested in what general narratives exist about a particular topic or issue. This chapter will explore different approaches to content analysis and provide helpful tips on how to collect data, how to turn that data into codes for analysis, and how to go about presenting what is found through analysis. It is also a nice segue between our data collection methods (e.g., interviewing, observation) chapters and chapters 18 and 19, whose focus is on coding, the primary means of data analysis for most qualitative data. In many ways, the methods of content analysis are quite similar to the method of coding.


Although the body of material (“content”) to be collected and analyzed can be nearly anything, most qualitative content analysis is applied to forms of human communication (e.g., media posts, news stories, campaign speeches, advertising jingles). The point of the analysis is to understand this communication, to systematically and rigorously explore its meanings, assumptions, themes, and patterns. Historical and archival sources may be the subject of content analysis, but there are other ways to analyze (“code”) this data when not overly concerned with the communicative aspect (see chapters 18 and 19). This is why we tend to consider content analysis its own method of data collection as well as a method of data analysis. Still, many of the techniques you learn in this chapter will be helpful to any “coding” scheme you develop for other kinds of qualitative data. Just remember that content analysis is a particular form with distinct aims and goals and traditions.

An Overview of the Content Analysis Process

The first step: selecting content.

Figure 17.1 is a display of possible content for content analysis. The first step in content analysis is making smart decisions about what content you will want to analyze and to clearly connect this content to your research question or general focus of research. Why are you interested in the messages conveyed in this particular content? What will the identification of patterns here help you understand? Content analysis can be fun to do, but in order to make it research, you need to fit it into a research plan.

News stories Blogs Comment posts Lyrics
Letters to editor Films Cartoons Advertisements
Brand packaging Logos Instagram photos Tweets
Photographs Graffiti Street signs Personalized license plates
Avatars (names, shapes, presentations) Nicknames Band posters Building names

Figure 17.1. A Non-exhaustive List of "Content" for Content Analysis

To take one example, let us imagine you are interested in gender presentations in society and how presentations of gender have changed over time. There are various forms of content out there that might help you document changes. You could, for example, begin by creating a list of magazines that are coded as being for “women” (e.g., Women’s Daily Journal ) and magazines that are coded as being for “men” (e.g., Men’s Health ). You could then select a date range that is relevant to your research question (e.g., 1950s–1970s) and collect magazines from that era. You might create a “sample” by deciding to look at three issues for each year in the date range and a systematic plan for what to look at in those issues (e.g., advertisements? Cartoons? Titles of articles? Whole articles?). You are not just going to look at some magazines willy-nilly. That would not be systematic enough to allow anyone to replicate or check your findings later on. Once you have a clear plan of what content is of interest to you and what you will be looking at, you can begin, creating a record of everything you are including as your content. This might mean a list of each advertisement you look at or each title of stories in those magazines along with its publication date. You may decide to have multiple “content” in your research plan. For each content, you want a clear plan for collecting, sampling, and documenting.

The Second Step: Collecting and Storing

Once you have a plan, you are ready to collect your data. This may entail downloading from the internet, creating a Word document or PDF of each article or picture, and storing these in a folder designated by the source and date (e.g., “ Men’s Health advertisements, 1950s”). Sølvberg (2021), for example, collected posted job advertisements for three kinds of elite jobs (economic, cultural, professional) in Sweden. But collecting might also mean going out and taking photographs yourself, as in the case of graffiti, street signs, or even what people are wearing. Chaise LaDousa, an anthropologist and linguist, took photos of “house signs,” which are signs, often creative and sometimes offensive, hung by college students living in communal off-campus houses. These signs were a focal point of college culture, sending messages about the values of the students living in them. Some of the names will give you an idea: “Boot ’n Rally,” “The Plantation,” “Crib of the Rib.” The students might find these signs funny and benign, but LaDousa (2011) argued convincingly that they also reproduced racial and gender inequalities. The data here already existed—they were big signs on houses—but the researcher had to collect the data by taking photographs.

In some cases, your content will be in physical form but not amenable to photographing, as in the case of films or unwieldy physical artifacts you find in the archives (e.g., undigitized meeting minutes or scrapbooks). In this case, you need to create some kind of detailed log (fieldnotes even) of the content that you can reference. In the case of films, this might mean watching the film and writing down details for key scenes that become your data. [2] For scrapbooks, it might mean taking notes on what you are seeing, quoting key passages, describing colors or presentation style. As you might imagine, this can take a lot of time. Be sure you budget this time into your research plan.

Researcher Note

A note on data scraping : Data scraping, sometimes known as screen scraping or frame grabbing, is a way of extracting data generated by another program, as when a scraping tool grabs information from a website. This may help you collect data that is on the internet, but you need to be ethical in how to employ the scraper. A student once helped me scrape thousands of stories from the Time magazine archives at once (although it took several hours for the scraping process to complete). These stories were freely available, so the scraping process simply sped up the laborious process of copying each article of interest and saving it to my research folder. Scraping tools can sometimes be used to circumvent paywalls. Be careful here!
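If you do scrape responsibly, the extraction step itself is mechanically simple. As a hedged illustration, using only Python’s standard library and a literal HTML string standing in for an already-downloaded archive page, here is one way to pull story headlines out of the page markup:

```python
from html.parser import HTMLParser

class HeadlineExtractor(HTMLParser):
    """Collect the text of every <h2> element, e.g. story headlines on an archive page."""
    def __init__(self):
        super().__init__()
        self.in_h2 = False
        self.headlines = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self.in_h2 = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_h2 = False

    def handle_data(self, data):
        if self.in_h2 and data.strip():
            self.headlines.append(data.strip())

# In a real project this string would come from a saved file or an HTTP request;
# a literal stands in for it here.
html = "<html><body><h2>Story one</h2><p>...</p><h2>Story two</h2></body></html>"
parser = HeadlineExtractor()
parser.feed(html)
print(parser.headlines)  # ['Story one', 'Story two']
```

The tag name here is an assumption about the page’s structure; real sites vary, so inspect the markup first. And, as the note above warns, scrape only content you are entitled to access.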

The Third Step: Analysis

There is often an assumption among novice researchers that once you have collected your data, you are ready to write about what you have found. Actually, you haven’t yet found anything, and if you try to write up your results, you will probably be staring sadly at a blank page. Between the collection and the writing comes the difficult task of systematically and repeatedly reviewing the data in search of patterns and themes that will help you interpret the data, particularly its communicative aspect (e.g., What is it that is being communicated here, with these “house signs” or in the pages of Men’s Health ?).

The first time you go through the data, keep an open mind on what you are seeing (or hearing), and take notes about your observations that link up to your research question. In the beginning, it can be difficult to know what is relevant and what is extraneous. Sometimes, your research question changes based on what emerges from the data. Use the first round of review to consider this possibility, but then commit yourself to following a particular focus or path. If you are looking at how gender gets made or re-created, don’t follow the white rabbit down a hole about environmental injustice unless you decide that this really should be the focus of your study or that issues of environmental injustice are linked to gender presentation. In the second round of review, be very clear about emerging themes and patterns. Create codes (more on these in chapters 18 and 19) that will help you simplify what you are noticing. For example, “men as outdoorsy” might be a common trope you see in advertisements. Whenever you see this, mark the passage or picture. In your third (or fourth or fifth) round of review, begin to link up the tropes you’ve identified, looking for particular patterns and assumptions. You’ve drilled down to the details, and now you are building back up to figure out what they all mean. Start thinking about theory—either theories you have read about and are using as a frame of your study (e.g., gender as performance theory) or theories you are building yourself, as in the Grounded Theory tradition. Once you have a good idea of what is being communicated and how, go back to the data at least one more time to look for disconfirming evidence. Maybe you thought “men as outdoorsy” was of importance, but when you look hard, you note that women are presented as outdoorsy just as often. You just hadn’t paid attention. 
It is very important, as any kind of researcher but particularly as a qualitative researcher, to test yourself and your emerging interpretations in this way.

The Fourth and Final Step: The Write-Up

Only after you have fully completed analysis, with its many rounds of review and analysis, will you be able to write about what you found. The interpretation exists not in the data but in your analysis of the data. Before writing your results, you will want to very clearly describe how you chose the data here and all the possible limitations of this data (e.g., historical-trace problem or power problem; see chapter 16). Acknowledge any limitations of your sample. Describe the audience for the content, and discuss the implications of this. Once you have done all of this, you can put forth your interpretation of the communication of the content, linking to theory where doing so would help your readers understand your findings and what they mean more generally for our understanding of how the social world works. [3]

Analyzing Content: Helpful Hints and Pointers

Although every data set is unique and each researcher will have a different and unique research question to address with that data set, there are some common practices and conventions. When reviewing your data, what do you look at exactly? How will you know if you have seen a pattern? How do you note or mark your data?

Let’s start with the last question first. If your data is stored digitally, there are various ways you can highlight or mark up passages. You can, of course, do this with literal highlighters, pens, and pencils if you have print copies. But there are also qualitative software programs to help you store the data, retrieve the data, and mark the data. This can simplify the process, although it cannot do the work of analysis for you.

Qualitative software can be very expensive, so the first thing to do is to find out if your institution (or program) has a universal license its students can use. If they do not, most programs have special student licenses that are less expensive. The two most used programs at this moment are probably ATLAS.ti and NVivo. Both can cost more than $500 [4] but provide everything you could possibly need for storing data, content analysis, and coding. They also have a lot of customer support, and you can find many official and unofficial tutorials on how to use the programs’ features on the web. Dedoose, created by academic researchers at UCLA, is a decent program that lacks many of the bells and whistles of the two big programs. Instead of paying all at once, you pay monthly, as you use the program. The monthly fee is relatively affordable (less than $15), so this might be a good option for a small project. HyperRESEARCH is another basic program created by academic researchers, and it is free for small projects (those that have limited cases and material to import). You can pay a monthly fee if your project expands past the free limits. I have personally used all four of these programs, and they each have their pluses and minuses.

Regardless of which program you choose, you should know that none of them will actually do the hard work of analysis for you. They are incredibly useful for helping you store and organize your data, and they provide abundant tools for marking, comparing, and coding your data so you can make sense of it. But making sense of it will always be your job alone.

So let’s say you have some software, and you have uploaded all of your content into the program: video clips, photographs, transcripts of news stories, articles from magazines, even digital copies of college scrapbooks. Now what do you do? What are you looking for? How do you see a pattern? The answers to these questions will depend partially on the particular research question you have, or at least the motivation behind your research. Let’s go back to the idea of looking at gender presentations in magazines from the 1950s to the 1970s. Here are some things you can look at and code in the content: (1) actions and behaviors, (2) events or conditions, (3) activities, (4) strategies and tactics, (5) states or general conditions, (6) meanings or symbols, (7) relationships/interactions, (8) consequences, and (9) settings. Table 17.1 lists these with examples from our gender presentation study.

Table 17.1. Examples of What to Note During Content Analysis

What can be noted/coded Example from Gender Presentation Study
Actions and behaviors
Events or conditions
Activities
Strategies and tactics
States/conditions
Meanings/symbols
Relationships/interactions
Consequences
Settings

One thing to note about the examples in table 17.1: sometimes we note (mark, record, code) a single example, while other times, as in “settings,” we are recording a recurrent pattern. To help you spot patterns, it is useful to mark every setting, including a notation on gender. Using software can help you do this efficiently. You can then call up “setting by gender” and note this emerging pattern. There’s an element of counting here, which we normally think of as quantitative data analysis, but we are using the count to identify a pattern that will be used to help us interpret the communication. Content analyses often include counting as part of the interpretive (qualitative) process.
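The “setting by gender” tally described above is easy to reproduce even without dedicated software. A minimal sketch, using hypothetical codes for a handful of advertisements:

```python
from collections import Counter

# Each tuple is one coded advertisement: (gender shown, setting)
coded_ads = [
    ("man", "outdoors"), ("man", "outdoors"), ("man", "office"),
    ("woman", "kitchen"), ("woman", "kitchen"), ("woman", "outdoors"),
]

# Cross-tabulate setting by gender with a simple count
setting_by_gender = Counter(coded_ads)
print(setting_by_gender[("man", "outdoors")])   # 2
print(setting_by_gender[("woman", "kitchen")])  # 2
```

As the paragraph notes, the count itself is not the finding; it is a prompt for interpretation, such as asking why particular settings cluster with particular genders in this material.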

In your own study, you may not need or want to look at all of the elements listed in table 17.1. Even in our imagined example, some are more useful than others. For example, “strategies and tactics” is a bit of a stretch here. In studies that are looking specifically at, say, policy implementation or social movements, this category will prove much more salient.

Another way to think about “what to look at” is to consider aspects of your content in terms of units of analysis. You can drill down to the specific words used (e.g., the adjectives commonly used to describe “men” and “women” in your magazine sample) or move up to the more abstract level of concepts used (e.g., the idea that men are more rational than women). Counting for the purpose of identifying patterns is particularly useful here. How many times is that idea of women’s irrationality communicated? How is it is communicated (in comic strips, fictional stories, editorials, etc.)? Does the incidence of the concept change over time? Perhaps the “irrational woman” was everywhere in the 1950s, but by the 1970s, it is no longer showing up in stories and comics. By tracing its usage and prevalence over time, you might come up with a theory or story about gender presentation during the period. Table 17.2 provides more examples of using different units of analysis for this work along with suggestions for effective use.

Table 17.2. Examples of Unit of Analysis in Content Analysis

Unit of Analysis How Used...
Words
Themes
Characters
Paragraphs
Items
Concepts
Semantics

Every qualitative content analysis is unique in its particular focus and the particular data used, so there is no single correct way to approach analysis. You should have a better idea, however, of what kinds of things to look for and how to go about looking for them. The next two chapters will take you further into the coding process, the primary analytical tool for qualitative research in general.

Further Readings

Cidell, Julie. 2010. “Content Clouds as Exploratory Qualitative Data Analysis.” Area 42(4):514–523. A demonstration of using visual “content clouds” as a form of exploratory qualitative data analysis using transcripts of public meetings and content of newspaper articles.

Hsieh, Hsiu-Fang, and Sarah E. Shannon. 2005. “Three Approaches to Qualitative Content Analysis.” Qualitative Health Research 15(9):1277–1288. Distinguishes three distinct approaches to QCA: conventional, directed, and summative. Uses hypothetical examples from end-of-life care research.

Jackson, Romeo, Alex C. Lange, and Antonio Duran. 2021. “A Whitened Rainbow: The In/Visibility of Race and Racism in LGBTQ Higher Education Scholarship.” Journal Committed to Social Change on Race and Ethnicity (JCSCORE) 7(2):174–206.* Using a “critical summative content analysis” approach, examines research published on LGBTQ people between 2009 and 2019.

Krippendorff, Klaus. 2018. Content Analysis: An Introduction to Its Methodology . 4th ed. Thousand Oaks, CA: SAGE. A very comprehensive textbook on both quantitative and qualitative forms of content analysis.

Mayring, Philipp. 2022. Qualitative Content Analysis: A Step-by-Step Guide . Thousand Oaks, CA: SAGE. Formulates an eight-step approach to QCA.

Messinger, Adam M. 2012. “Teaching Content Analysis through ‘Harry Potter.’” Teaching Sociology 40(4):360–367. This is a fun example of a relatively brief foray into content analysis using the music found in Harry Potter films.

Neuendorf, Kimberly A. 2002. The Content Analysis Guidebook . Thousand Oaks, CA: SAGE. Although a helpful guide to content analysis in general, be warned that this textbook definitely favors quantitative over qualitative approaches to content analysis.

Schreier, Margrit. 2012. Qualitative Content Analysis in Practice . Thousand Oaks, CA: SAGE. Arguably the most accessible guidebook for QCA, written by a professor based in Germany.

Weber, Matthew A., Shannon Caplan, Paul Ringold, and Karen Blocksom. 2017. “Rivers and Streams in the Media: A Content Analysis of Ecosystem Services.” Ecology and Society 22(3).* Examines the content of a blog hosted by National Geographic and articles published in The New York Times and the Wall Street Journal for stories on rivers and streams (e.g., water-quality flooding).

  • There are ways of handling content analysis quantitatively, however. Some practitioners therefore specify qualitative content analysis (QCA). In this chapter, all content analysis is QCA unless otherwise noted. ↵
  • Note that some qualitative software allows you to upload whole films or film clips for coding. You will still have to get access to the film, of course. ↵
  • See chapter 20 for more on the final presentation of research. ↵
  • Actually, ATLAS.ti is an annual license, while NVivo is a perpetual license, but both are going to cost you at least $500 to use. Student rates may be lower. And don’t forget to ask your institution or program if they already have a software license you can use. ↵

A method of both data collection and data analysis in which a given content (textual, visual, graphic) is examined systematically and rigorously to identify meanings, themes, patterns and assumptions.  Qualitative content analysis (QCA) is concerned with gathering and interpreting an existing body of material.    

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.


What Is Qualitative Content Analysis?

QCA explained simply (with examples)

By: Jenna Crosley (PhD). Reviewed by: Dr Eunice Rautenbach (DTech) | February 2021

If you’re in the process of preparing for your dissertation, thesis or research project, you’ve probably encountered the term “ qualitative content analysis ” – it’s quite a mouthful. If you’ve landed on this post, you’re probably a bit confused about it. Well, the good news is that you’ve come to the right place…

Overview: Qualitative Content Analysis

  • What (exactly) is qualitative content analysis
  • The two main types of content analysis
  • When to use content analysis
  • How to conduct content analysis (the process)
  • The advantages and disadvantages of content analysis

1. What is content analysis?

Content analysis is a  qualitative analysis method  that focuses on recorded human artefacts such as manuscripts, voice recordings and journals. Content analysis investigates these written, spoken and visual artefacts without explicitly extracting data from participants – this is called  unobtrusive  research.

In other words, with content analysis, you don’t necessarily need to interact with participants (although you can if necessary); you can simply analyse the data that they have already produced. With this type of analysis, you can analyse data such as text messages, books, Facebook posts, videos, and audio (just to mention a few).

The basics – explicit and implicit content

When working with content analysis, explicit and implicit content will play a role. Explicit data is transparent and easy to identify, while implicit data is that which requires some form of interpretation and is often of a subjective nature. Sounds a bit fluffy? Here’s an example:

Joe: Hi there, what can I help you with? 

Lauren: I recently adopted a puppy and I’m worried that I’m not feeding him the right food. Could you please advise me on what I should be feeding? 

Joe: Sure, just follow me and I’ll show you. Do you have any other pets?

Lauren: Only one, and it tweets a lot!

In this exchange, the explicit data indicates that Joe is helping Lauren to find the right puppy food. Joe asks Lauren whether she has any pets aside from her puppy. This data is explicit because it requires no interpretation.

On the other hand, implicit data , in this case, includes the fact that the speakers are in a pet store. This information is not clearly stated but can be inferred from the conversation, where Joe is helping Lauren to choose pet food. An additional piece of implicit data is that Lauren likely has some type of bird as a pet. This can be inferred from the way that Lauren states that her pet “tweets”.

As you can see, explicit and implicit data both play a role in human interaction  and are an important part of your analysis. However, it’s important to differentiate between these two types of data when you’re undertaking content analysis. Interpreting implicit data can be rather subjective as conclusions are based on the researcher’s interpretation. This can introduce an element of bias , which risks skewing your results.

Explicit and implicit data both play an important role in your content analysis, but it’s important to differentiate between them.

2. The two types of content analysis

Now that you understand the difference between implicit and explicit data, let’s move on to the two general types of content analysis : conceptual and relational content analysis. Importantly, while conceptual and relational content analysis both follow similar steps initially, the aims and outcomes of each are different.

Conceptual analysis focuses on the number of times a concept occurs in a set of data and is generally focused on explicit data. For example, if you were to have the following conversation:

Marie: She told me that she has three cats.

Jean: What are her cats’ names?

Marie: I think the first one is Bella, the second one is Mia, and… I can’t remember the third cat’s name.

In this data, you can see that the word “cat” has been used three times. Through conceptual content analysis, you can deduce that cats are the central topic of the conversation. You can also perform a frequency analysis , where you assess the term’s frequency in the data. For example, in the exchange above, the word “cat” makes up 9% of the data. In other words, conceptual analysis brings a little bit of quantitative analysis into your qualitative analysis.
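That 9% figure is just a token count. A rough sketch of how such a frequency analysis might be computed (the tokenisation rule here is an illustrative choice, not a standard):

```python
import re
from collections import Counter

transcript = (
    "She told me that she has three cats. "
    "What are her cats' names? "
    "I think the first one is Bella, the second one is Mia, "
    "and I can't remember the third cat's name."
)

# Tokenise into lowercase words; apostrophes are kept so "cat's" stays one token
tokens = re.findall(r"[a-z']+", transcript.lower())
counts = Counter(tokens)

# Count every token built on the stem "cat" (cats, cats', cat's)
cat_mentions = sum(n for tok, n in counts.items() if tok.startswith("cat"))
frequency = cat_mentions / len(tokens)
print(cat_mentions, round(frequency * 100))  # 3 9
```

A naive stem match like `startswith("cat")` would also catch unrelated words such as “catalogue” in a larger corpus, so in practice you would define your coding terms more carefully.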

As you can see, the above data is without interpretation and focuses on explicit data . Relational content analysis, on the other hand, takes a more holistic view by focusing more on implicit data in terms of context, surrounding words and relationships.

There are three types of relational analysis:

  • Affect extraction
  • Proximity analysis
  • Cognitive mapping

Affect extraction is when you assess concepts according to emotional attributes. These emotions are typically mapped on scales, such as a Likert scale or a rating scale ranging from 1 to 5, where 1 is “very sad” and 5 is “very happy”.

If participants are talking about their achievements, they are likely to be given a score of 4 or 5, depending on how good they feel about it. If a participant is describing a traumatic event, they are likely to have a much lower score, either 1 or 2.
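In practice, affect extraction is often operationalised with a scoring lexicon. The sketch below uses a tiny, entirely hypothetical lexicon mapped onto the 1-to-5 scale described above; real studies would use a validated instrument or trained coders:

```python
# A tiny, hypothetical affect lexicon mapping cue words to a 1-5 happiness score
lexicon = {"proud": 5, "delighted": 5, "pleased": 4, "worried": 2, "devastated": 1}

def affect_score(statement, default=3):
    """Average the scores of any cue words present; 3 (neutral) if none match."""
    hits = [score for word, score in lexicon.items() if word in statement.lower()]
    return sum(hits) / len(hits) if hits else default

print(affect_score("I was so proud of my achievement"))  # 5.0
print(affect_score("I was worried and devastated"))      # 1.5
print(affect_score("The meeting started at nine"))       # 3
```

Note how crude this is: it ignores negation (“not proud”) and context, which is exactly why affect extraction is usually combined with human judgement rather than fully automated.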

Proximity analysis identifies explicit terms (such as those found in a conceptual analysis) and the patterns in terms of how they co-occur in a text. In other words, proximity analysis investigates the relationship between terms and aims to group these to extract themes and develop meaning.

Proximity analysis is typically utilised when you’re looking for hard facts rather than emotional, cultural, or contextual factors. For example, if you were to analyse a political speech, you may want to focus only on what has been said, rather than implications or hidden meanings. To do this, you would make use of explicit data, discounting any underlying meanings and implications of the speech.
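As a rough illustration, proximity analysis can be sketched as counting how often pre-selected terms co-occur within a sliding window of words. The window size, the example sentence and the term list below are all illustrative assumptions:

```python
from collections import Counter

def cooccurrences(tokens, terms, window=5):
    """Count pairs of target terms appearing within `window` tokens."""
    pairs = Counter()
    for i, tok in enumerate(tokens):
        if tok not in terms:
            continue
        for other in tokens[i + 1 : i + window]:
            if other in terms and other != tok:
                pairs[tuple(sorted((tok, other)))] += 1
    return pairs

# An invented snippet of speech, pre-tokenised for simplicity.
tokens = "the new policy will cut taxes and taxes fund the policy".split()
print(cooccurrences(tokens, {"policy", "taxes"}, window=5))
```

A pair that co-occurs frequently ("policy" near "taxes") suggests a relationship worth exploring as a theme.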

Lastly, there’s cognitive mapping, which can be used on its own or in combination with proximity analysis. Cognitive mapping involves taking different texts and comparing them in a visual format – i.e. a cognitive map. Typically, you’d use cognitive mapping in studies that assess changes in terms, definitions, and meanings over time. It can also serve as a way to visualise the results of affect extraction or proximity analysis, often in the form of a graphic map.

Example of a cognitive map

To recap on the essentials, content analysis is a qualitative analysis method that focuses on recorded human artefacts . It involves both conceptual analysis (which is more numbers-based) and relational analysis (which focuses on the relationships between concepts and how they’re connected).


3. When should you use content analysis?

Content analysis is a useful tool that provides insight into trends of communication . For example, you could use a discussion forum as the basis of your analysis and look at the types of things the members talk about as well as how they use language to express themselves. Content analysis is flexible in that it can be applied to the individual, group, and institutional level.

Content analysis is typically used in studies where the aim is to better understand factors such as behaviours, attitudes, values, emotions, and opinions . For example, you could use content analysis to investigate an issue in society, such as miscommunication between cultures. In this example, you could compare patterns of communication in participants from different cultures, which will allow you to create strategies for avoiding misunderstandings in intercultural interactions.

Another example could involve conducting content analysis on a publication such as a book. Here, you could gather data on the themes, topics, language use and opinions reflected in the text to draw conclusions regarding the political leanings (such as conservative or liberal) of the publication.


4. How to conduct a qualitative content analysis

Conceptual and relational content analysis differ in terms of their exact process ; however, there are some similarities. Let’s have a look at these first – i.e., the generic process:

  • Recap on your research questions
  • Undertake bracketing to identify biases
  • Operationalise your variables and develop a coding scheme
  • Code the data and undertake your analysis

Step 1 – Recap on your research questions

It’s always useful to begin a project with research questions , or at least with an idea of what you are looking for. In fact, if you’ve spent time reading this blog, you’ll know that it’s useful to recap on your research questions, aims and objectives when undertaking pretty much any research activity. In the context of content analysis, it’s difficult to know what needs to be coded and what doesn’t, without a clear view of the research questions.

For example, if you were to code a conversation focused on basic issues of social justice, you may be met with a wide range of topics that may be irrelevant to your research. However, if you approach this data set with the specific intent of investigating opinions on gender issues, you will be able to focus on this topic alone, which would allow you to code only what you need to investigate.


Step 2 – Reflect on your personal perspectives and biases

It’s vital that you reflect on your own preconceptions of the topic at hand and identify the biases that you might drag into your content analysis – this is called “bracketing”. By identifying these upfront, you’ll be more aware of them and less likely to have them subconsciously influence your analysis.

For example, if you were to investigate how a community converses about unequal access to healthcare, it is important to assess your views to ensure that you don’t project these onto your understanding of the opinions put forth by the community. If you have access to medical aid, for instance, you should not allow this to interfere with your examination of unequal access.


Step 3 – Operationalise your variables and develop a coding scheme

Next, you need to operationalise your variables . But what does that mean? Simply put, it means that you have to define each variable or construct . Give every item a clear definition – what does it mean (include) and what does it not mean (exclude). For example, if you were to investigate children’s views on healthy foods, you would first need to define what age group/range you’re looking at, and then also define what you mean by “healthy foods”.

In combination with the above, it is important to create a coding scheme , which will consist of information about your variables (how you defined each variable), as well as a process for analysing the data. For this, you would refer back to how you operationalised/defined your variables so that you know how to code your data.

For example, when coding, when should you code a food as “healthy”? What makes a food choice healthy? Is it the absence of sugar or saturated fat? Is it the presence of fibre and protein? It’s very important to have clearly defined variables to achieve consistent coding – without this, your analysis will get very muddy, very quickly.
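One way to keep such a coding scheme unambiguous is to write the include/exclude rules down explicitly, almost like a lookup table. The sketch below is purely illustrative – the food lists are assumptions, not nutritional guidance:

```python
# Each code carries an explicit include/exclude rule, mirroring the
# operationalised definitions in the coding scheme. Lists are invented.
CODING_SCHEME = {
    "healthy food": {
        "include": {"broccoli", "peaches", "bananas", "lentils"},
        "exclude": {"sweets", "crisps", "fizzy drinks"},
    },
    "unhealthy food": {
        "include": {"sweets", "crisps", "fizzy drinks"},
        "exclude": set(),
    },
}

def code_mention(mention, scheme=CODING_SCHEME):
    """Return the codes whose rules the mention satisfies."""
    mention = mention.lower()
    return [
        code
        for code, rule in scheme.items()
        if mention in rule["include"] and mention not in rule["exclude"]
    ]

print(code_mention("broccoli"))  # coded as healthy food
print(code_mention("sweets"))    # coded as unhealthy food
```

Because the rules live in one place, every coder (human or machine) applies the same definition every time, which is exactly what consistent coding requires.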


Step 4 – Code and analyse the data

The next step is to code the data. At this stage, there are some differences between conceptual and relational analysis.

As described earlier in this post, conceptual analysis looks at the existence and frequency of concepts, whereas a relational analysis looks at the relationships between concepts. For both types of analyses, it is important to pre-select a concept that you wish to assess in your data. Using the example of studying children’s views on healthy food, you could pre-select the concept of “healthy food” and assess the number of times the concept pops up in your data.

Here is where conceptual and relational analysis start to differ.

At this stage of conceptual analysis , it is necessary to decide on the level of analysis you’ll perform on your data, and whether this will exist on the word, phrase, sentence, or thematic level. For example, will you code the phrase “healthy food” on its own? Will you code each term relating to healthy food (e.g., broccoli, peaches, bananas, etc.) with the code “healthy food” or will these be coded individually? It is very important to establish this from the get-go to avoid inconsistencies that could result in you having to code your data all over again.

Relational analysis, on the other hand, requires you to decide on the type of analysis. Will you use affect extraction? Proximity analysis? Cognitive mapping? A mix? It’s vital to determine the type of analysis before you begin to code your data so that you can maintain the reliability and validity of your research.


How to conduct conceptual analysis

First, let’s have a look at the process for conceptual analysis.

Once you’ve decided on your level of analysis, you need to establish how you will code your concepts, and how many of these you want to code. Here you can choose whether you want to code in a deductive or inductive manner. Just to recap, deductive coding is when you begin the coding process with a set of pre-determined codes, whereas inductive coding entails the codes emerging as you progress with the coding process. Here it is also important to decide what should be included and excluded from your analysis, and also what levels of implication you wish to include in your codes.

For example, if you have the concept of “tall”, can you include “up in the clouds”, derived from the sentence, “the giraffe’s head is up in the clouds” in the code, or should it be a separate code? In addition to this, you need to know what levels of words may be included in your codes or not. For example, if you say, “the panda is cute” and “look at the panda’s cuteness”, can “cute” and “cuteness” be included under the same code?
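Decisions like the “cute”/“cuteness” one can be encoded as a normalisation step applied before coding. The crude suffix-stripper below is an illustrative stand-in for a proper stemmer, just to show the idea:

```python
# Suffixes to strip so that related word forms share one code.
# This is a deliberately crude assumption; real projects would use a
# stemmer or lemmatiser from an NLP library instead.
SUFFIXES = ("ness", "ly", "s")

def normalise(token):
    """Map related word forms (e.g. 'cuteness') onto one code ('cute')."""
    token = token.lower()
    for suffix in SUFFIXES:
        if token.endswith(suffix) and len(token) - len(suffix) >= 3:
            return token[: -len(suffix)]
    return token

print(normalise("cute"), normalise("cuteness"))  # both map to "cute"
```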

Once you’ve considered the above, it’s time to code the text . We’ve already published a detailed post about coding , so we won’t go into that process here. Once you’re done coding, you can move on to analysing your results. This is where you will aim to find generalisations in your data, and thus draw your conclusions .

How to conduct relational analysis

Now let’s return to relational analysis.

As mentioned, you want to look at the relationships between concepts . To do this, you’ll need to create categories by reducing your data (in other words, grouping similar concepts together) and then also code for words and/or patterns. These are both done with the aim of discovering whether these words exist, and if they do, what they mean.

Your next step is to assess your data and to code the relationships between your terms and meanings, so that you can move on to your final step, which is to sum up and analyse the data.

To recap, it’s important to start your analysis process by reviewing your research questions and identifying your biases . From there, you need to operationalise your variables, code your data and then analyse it.


5. What are the pros & cons of content analysis?

One of the main advantages of content analysis is that it allows you to use a mix of quantitative and qualitative research methods, which results in a more scientifically rigorous analysis.

For example, with conceptual analysis, you can count the number of times that a term or a code appears in a dataset, which can be assessed from a quantitative standpoint. In addition to this, you can then use a qualitative approach to investigate the underlying meanings of these and relationships between them.

Content analysis is also unobtrusive and therefore poses fewer ethical issues than some other analysis methods. Because the content you’ll analyse oftentimes already exists, you won’t have to collect data directly from participants. When coded correctly, data is analysed in a systematic and transparent manner, which makes the research far easier to replicate (in other words, to recreate under the same conditions).

On the downside , qualitative research (in general, not just content analysis) is often critiqued for being too subjective and for not being scientifically rigorous enough. This is where reliability (how replicable a study is by other researchers) and validity (how suitable the research design is for the topic being investigated) come into play – if you take these into account, you’ll be on your way to achieving sound research results.


Recap: Qualitative content analysis

In this post, we’ve covered a lot of ground, from the two types of content analysis through to the step-by-step analysis process and the method’s pros and cons.

If you have any questions about qualitative content analysis, feel free to leave a comment below. If you’d like 1-on-1 help with your qualitative content analysis, be sure to book an initial consultation with one of our friendly Research Coaches.


Using Content Analysis

This guide provides an introduction to content analysis, a research methodology that examines words or phrases within a wide range of texts.

  • Introduction to Content Analysis: Read about the history and uses of content analysis.
  • Conceptual Analysis: Read an overview of conceptual analysis and its associated methodology.
  • Relational Analysis: Read an overview of relational analysis and its associated methodology.
  • Commentary: Read about issues of reliability and validity with regard to content analysis as well as the advantages and disadvantages of using content analysis as a research methodology.
  • Examples: View examples of real and hypothetical studies that use content analysis.
  • Annotated Bibliography: Complete list of resources used in this guide and beyond.

An Introduction to Content Analysis

Content analysis is a research tool used to determine the presence of certain words or concepts within texts or sets of texts. Researchers quantify and analyze the presence, meanings and relationships of such words and concepts, then make inferences about the messages within the texts, the writer(s), the audience, and even the culture and time of which these are a part. Texts can be defined broadly as books, book chapters, essays, interviews, discussions, newspaper headlines and articles, historical documents, speeches, conversations, advertising, theater, informal conversation, or really any occurrence of communicative language. Texts in a single study may also represent a variety of different types of occurrences, such as Palmquist's 1990 study of two composition classes, in which he analyzed student and teacher interviews, writing journals, classroom discussions and lectures, and out-of-class interaction sheets. To conduct a content analysis on any such text, the text is coded, or broken down, into manageable categories on a variety of levels--word, word sense, phrase, sentence, or theme--and then examined using one of content analysis' basic methods: conceptual analysis or relational analysis.

A Brief History of Content Analysis

Historically, content analysis was a time-consuming process. Analysis was done manually, or slow mainframe computers were used to analyze punch cards containing data punched in by human coders. Single studies could employ thousands of these cards. Human error and time constraints made this method impractical for large texts. However, despite its impracticality, content analysis was already an often-utilized research method by the 1940s. Although initially limited to studies that examined texts for the frequency of the occurrence of identified terms (word counts), by the mid-1950s researchers were already starting to consider the need for more sophisticated methods of analysis, focusing on concepts rather than simply words, and on semantic relationships rather than just presence (de Sola Pool, 1959). While both traditions still continue today, content analysis is now also utilized to explore mental models, and their linguistic, affective, cognitive, social, cultural and historical significance.

Uses of Content Analysis

Perhaps due to the fact that it can be applied to examine any piece of writing or occurrence of recorded communication, content analysis is currently used in a dizzying array of fields, ranging from marketing and media studies, to literature and rhetoric, ethnography and cultural studies, gender and age issues, sociology and political science, psychology and cognitive science, and many other fields of inquiry. Additionally, content analysis reflects a close relationship with socio- and psycholinguistics, and is playing an integral role in the development of artificial intelligence. The following list (adapted from Berelson, 1952) offers more possibilities for the uses of content analysis:

  • Reveal international differences in communication content
  • Detect the existence of propaganda
  • Identify the intentions, focus or communication trends of an individual, group or institution
  • Describe attitudinal and behavioral responses to communications
  • Determine psychological or emotional state of persons or groups

Types of Content Analysis

In this guide, we discuss two general categories of content analysis: conceptual analysis and relational analysis. Conceptual analysis can be thought of as establishing the existence and frequency of concepts most often represented by words or phrases in a text. For instance, say you have a hunch that your favorite poet often writes about hunger. With conceptual analysis you can determine how many times words such as hunger, hungry, famished, or starving appear in a volume of poems. In contrast, relational analysis goes one step further by examining the relationships among concepts in a text. Returning to the hunger example, with relational analysis, you could identify what other words or phrases hunger or famished appear next to and then determine what different meanings emerge as a result of these groupings.

Conceptual Analysis

Traditionally, content analysis has most often been thought of in terms of conceptual analysis. In conceptual analysis, a concept is chosen for examination, and the analysis involves quantifying and tallying its presence. Also known as thematic analysis [although this term is somewhat problematic, given its varied definitions in current literature--see Palmquist, Carley, & Dale (1997) vis-a-vis Smith (1992)], the focus here is on looking at the occurrence of selected terms within a text or texts, although the terms may be implicit as well as explicit. While explicit terms obviously are easy to identify, coding for implicit terms and deciding their level of implication is complicated by the need to base judgments on a somewhat subjective system. To attempt to limit the subjectivity, then (as well as to limit problems of reliability and validity ), coding such implicit terms usually involves the use of either a specialized dictionary or contextual translation rules. And sometimes, both tools are used--a trend reflected in recent versions of the Harvard and Lasswell dictionaries.

Methods of Conceptual Analysis

Conceptual analysis begins with identifying research questions and choosing a sample or samples. Once chosen, the text must be coded into manageable content categories. The process of coding is basically one of selective reduction . By reducing the text to categories consisting of a word, set of words or phrases, the researcher can focus on, and code for, specific words or patterns that are indicative of the research question.

An example of a conceptual analysis would be to examine several Clinton speeches on health care, made during the 1992 presidential campaign, and code them for the existence of certain words. In looking at these speeches, the research question might involve examining the number of positive words used to describe Clinton's proposed plan, and the number of negative words used to describe the current status of health care in America. The researcher would be interested only in quantifying these words, not in examining how they are related, which is a function of relational analysis. In conceptual analysis, the researcher simply wants to examine presence with respect to his/her research question, i.e. is there a stronger presence of positive or negative words used with respect to proposed or current health care plans, respectively.

Once the research question has been established, the researcher must make his/her coding choices with respect to the eight category coding steps indicated by Carley (1992).

Steps for Conducting Conceptual Analysis

The following discussion of steps that can be followed to code a text or set of texts during conceptual analysis use campaign speeches made by Bill Clinton during the 1992 presidential campaign as an example. To read about each step, click on the items in the list below:

  • Decide the level of analysis.

First, the researcher must decide upon the level of analysis . With the health care speeches, to continue the example, the researcher must decide whether to code for a single word, such as "inexpensive," or for sets of words or phrases, such as "coverage for everyone."

  • Decide how many concepts to code for.

The researcher must now decide how many different concepts to code for. This involves developing a pre-defined or interactive set of concepts and categories. The researcher must decide whether or not to code for every single positive or negative word that appears, or only certain ones that the researcher determines are most relevant to health care. Then, with this pre-defined set, the researcher has to determine how much flexibility he/she allows him/herself when coding. The question of whether the researcher codes only from this pre-defined set, or allows him/herself to add relevant categories not included in the set as he/she finds them in the text, must be answered. Determining a certain number and set of concepts allows a researcher to examine a text for very specific things, keeping him/her on task. But introducing a level of coding flexibility allows new, important material to be incorporated into the coding process that could have significant bearings on one's results.

  • Decide whether to code for existence or frequency of a concept.

After a certain number and set of concepts are chosen for coding , the researcher must answer a key question: is he/she going to code for existence or frequency ? This is important, because it changes the coding process. When coding for existence, "inexpensive" would only be counted once, no matter how many times it appeared. This would be a very basic coding process and would give the researcher a very limited perspective of the text. However, the number of times "inexpensive" appears in a text might be more indicative of importance. Knowing that "inexpensive" appeared 50 times, for example, compared to 15 appearances of "coverage for everyone," might lead a researcher to interpret that Clinton is trying to sell his health care plan based more on economic benefits, not comprehensive coverage. Knowing that "inexpensive" appeared, but not that it appeared 50 times, would not allow the researcher to make this interpretation, regardless of whether it is valid or not.
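The existence-versus-frequency choice is easy to see in code. In this sketch the speech snippet is invented; existence coding only asks whether a concept appears at all, while frequency coding counts every occurrence:

```python
from collections import Counter

# An invented speech fragment, standing in for a campaign transcript.
speech = (
    "an inexpensive plan, an inexpensive premium, "
    "inexpensive care and coverage for everyone"
)
tokens = speech.replace(",", "").split()

frequency = Counter(tokens)  # frequency coding: count every occurrence
existence = set(tokens)      # existence coding: appears at all or not

print("inexpensive" in existence)  # existence: did the concept appear?
print(frequency["inexpensive"])    # frequency: how many times?
```

Only the frequency version lets you say that "inexpensive" dominates "coverage", which is precisely the interpretive gain described above.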

  • Decide on how you will distinguish among concepts.

The researcher must next decide on the level of generalization, i.e. whether concepts are to be coded exactly as they appear, or if they can be recorded as the same even when they appear in different forms. For example, "expensive" might also appear as "expensiveness." The researcher needs to determine if the two words mean radically different things to him/her, or if they are similar enough that they can be coded as being the same thing, i.e. "expensive words." In line with this is the need to determine the level of implication one is going to allow. This entails more than subtle differences in tense or spelling, as with "expensive" and "expensiveness." Determining the level of implication would allow the researcher to code not only for the word "expensive," but also for words that imply "expensive." This could perhaps include technical words, jargon, or political euphemism, such as "economically challenging," that the researcher decides does not merit a separate category, but is better represented under the category "expensive," due to its implicit meaning of "expensive."

  • Develop rules for coding your texts.

After taking the generalization of concepts into consideration, a researcher will want to create translation rules that will allow him/her to streamline and organize the coding process so that he/she is coding for exactly what he/she wants to code for. Developing a set of rules helps the researcher insure that he/she is coding things consistently throughout the text, in the same way every time. If a researcher coded "economically challenging" as a separate category from "expensive" in one paragraph, then coded it under the umbrella of "expensive" when it occurred in the next paragraph, his/her data would be invalid. The interpretations drawn from that data will subsequently be invalid as well. Translation rules protect against this and give the coding process a crucial level of consistency and coherence.
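Translation rules can be thought of as a lookup table applied before counting, so that every variant or euphemism collapses into its umbrella category the same way every time. The rules below are illustrative assumptions:

```python
# Translation rules mapping variants and euphemisms to umbrella codes.
# These mappings are invented for illustration.
TRANSLATION_RULES = {
    "economically challenging": "expensive",
    "expensiveness": "expensive",
    "costly": "expensive",
}

def apply_rules(phrases, rules=TRANSLATION_RULES):
    """Replace each phrase with its umbrella code, if a rule exists."""
    return [rules.get(p.lower(), p.lower()) for p in phrases]

coded = apply_rules(["expensive", "economically challenging", "costly"])
print(coded)  # every variant collapses to "expensive"
```

Because the table is applied uniformly, "economically challenging" can never be coded one way in one paragraph and another way in the next.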

  • Decide what to do with "irrelevant" information.

The next choice a researcher must make involves irrelevant information . The researcher must decide whether irrelevant information should be ignored (as Weber, 1990, suggests), or used to reexamine and/or alter the coding scheme. In the case of this example, words like "and" and "the," as they appear by themselves, would be ignored. They add nothing to the quantification of words like "inexpensive" and "expensive" and can be disregarded without impacting the outcome of the coding.

  • Code the texts.

Once these choices about irrelevant information are made, the next step is to code the text. This is done either by hand, i.e. reading through the text and manually writing down concept occurrences, or through the use of various computer programs. Coding with a computer is one of contemporary conceptual analysis' greatest assets. By inputting one's categories, content analysis programs can easily automate the coding process and examine huge amounts of data, and a wider range of texts, quickly and efficiently. But automation is very dependent on the researcher's preparation and category construction. When coding is done manually, a researcher can recognize errors far more easily. A computer is only a tool and can only code based on the information it is given. This problem is most apparent when coding for implicit information, where category preparation is essential for accurate coding.

  • Analyze your results.

Once the coding is done, the researcher examines the data and attempts to draw whatever conclusions and generalizations are possible. Of course, before these can be drawn, the researcher must decide what to do with the information in the text that is not coded. One's options include either deleting or skipping over unwanted material, or viewing all information as relevant and important and using it to reexamine, reassess and perhaps even alter one's coding scheme. Furthermore, given that the conceptual analyst is dealing only with quantitative data, the levels of interpretation and generalizability are very limited. The researcher can only extrapolate as far as the data will allow. But it is possible to see trends, for example, that are indicative of much larger ideas. Using the example from step three, if the concept "inexpensive" appears 50 times, compared to 15 appearances of "coverage for everyone," then the researcher can pretty safely extrapolate that there does appear to be a greater emphasis on the economics of the health care plan, as opposed to its universal coverage for all Americans. It must be kept in mind that conceptual analysis, while extremely useful and effective for providing this type of information when done right, is limited by its focus and the quantitative nature of its examination. To more fully explore the relationships that exist between these concepts, one must turn to relational analysis.

Relational Analysis

Relational analysis, like conceptual analysis, begins with the act of identifying concepts present in a given text or set of texts. However, relational analysis seeks to go beyond presence by exploring the relationships between the concepts identified. Relational analysis has also been termed semantic analysis (Palmquist, Carley, & Dale, 1997). In other words, the focus of relational analysis is to look for semantic, or meaningful, relationships. Individual concepts, in and of themselves, are viewed as having no inherent meaning. Rather, meaning is a product of the relationships among concepts in a text. Carley (1992) asserts that concepts are "ideational kernels;" these kernels can be thought of as symbols which acquire meaning through their connections to other symbols.

Theoretical Influences on Relational Analysis

The kind of analysis that researchers employ will vary significantly according to their theoretical approach. Key theoretical approaches that inform content analysis include linguistics and cognitive science.

Linguistic approaches to content analysis focus analysis of texts on the level of a linguistic unit, typically single clause units. One example of this type of research is Gottschalk (1975), who developed an automated procedure which analyzes each clause in a text and assigns it a numerical score based on several emotional/psychological scales. Another technique is to code a text grammatically into clauses and parts of speech to establish a matrix representation (Carley, 1990).

Approaches that derive from cognitive science include the creation of decision maps and mental models. Decision maps attempt to represent the relationship(s) between ideas, beliefs, attitudes, and information available to an author when making a decision within a text. These relationships can be represented as logical, inferential, causal, sequential, and mathematical relationships. Typically, two of these links are compared in a single study, and are analyzed as networks. For example, Heise (1987) used logical and sequential links to examine symbolic interaction. This methodology is thought of as a more generalized cognitive mapping technique, rather than the more specific mental models approach.

Mental models are groups or networks of interrelated concepts that are thought to reflect conscious or subconscious perceptions of reality. According to cognitive scientists, internal mental structures are created as people draw inferences and gather information about the world. Mental models are a more specific mapping approach that goes beyond extraction and comparison because the resulting models can be numerically and graphically analyzed. Such models rely heavily on the use of computers to help analyze and construct mapping representations. Typically, studies based on this approach follow five general steps:

  • Identifying concepts
  • Defining relationship types
  • Coding the text on the basis of the concepts and relationship types identified in the first two steps
  • Coding the statements
  • Graphically displaying and numerically analyzing the resulting maps

To create the model, a researcher converts a text into a map of concepts and relations; the map is then analyzed on the level of concepts and statements, where a statement consists of two concepts and their relationship. Carley (1990) asserts that this makes possible the comparison of a wide variety of maps, representing multiple sources, implicit and explicit information, as well as socially shared cognitions.
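As a concrete illustration of this conversion, the sketch below turns a toy text into a set of statements in Python. The concept vocabulary, the sentence-level linking rule, and the single generic relation are all invented for the example and are far simpler than the coding schemes used in actual map analysis.

```python
from itertools import combinations

# Illustrative concept vocabulary (a real study would define its own).
CONCEPTS = {"scientist", "research", "collaboration", "discovery"}

def extract_statements(text, concepts=CONCEPTS):
    """Convert a text into map statements: (concept, relation, concept).

    Two concepts appearing in the same sentence are linked with a
    generic 'associated-with' relation, a deliberately crude rule;
    real studies define and code multiple relation types.
    """
    statements = set()
    for sentence in text.lower().split("."):
        found = sorted(c for c in concepts if c in sentence)
        for a, b in combinations(found, 2):
            statements.add((a, "associated-with", b))
    return statements

text = "The scientist pursued research. Collaboration led to a discovery."
stmts = extract_statements(text)
```

Once texts are reduced to such statement sets, maps from different sources can be compared with ordinary set operations on their statements.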

Relational Analysis: Overview of Methods

As with other sorts of inquiry, initial choices with regard to what is being studied and/or coded for often determine the possibilities of that particular study. For relational analysis, it is important to first decide which concept type(s) will be explored in the analysis. Studies have been conducted with as few as one and as many as 500 concept categories. Obviously, too many categories may obscure your results and too few can lead to unreliable and potentially invalid conclusions. Therefore, it is important to allow the context and necessities of your research to guide your coding procedures.

The steps to relational analysis that we consider in this guide suggest some of the possible avenues available to a researcher doing content analysis. We provide an example to make the process easier to grasp. However, the choices made within the context of the example are only a few of many possibilities. The diversity of techniques available suggests that there is quite a bit of enthusiasm for this mode of research. Once a procedure is rigorously tested, it can be applied and compared across populations over time. The process of relational analysis has achieved a high degree of computer automation but is still, like most forms of research, time consuming. Perhaps the strongest claim that can be made is that it maintains a high degree of statistical rigor without losing the richness of detail apparent in more qualitative methods.

Three Subcategories of Relational Analysis

Affect extraction: This approach provides an emotional evaluation of concepts explicit in a text. It is problematic because emotion may vary across time and populations. Nevertheless, when extended it can be a potent means of exploring the emotional/psychological state of the speaker and/or writer. Gottschalk (1995) provides an example of this type of analysis: by assigning identified concepts numeric values on corresponding emotional/psychological scales that can then be statistically examined, Gottschalk claims that the emotional/psychological state of the speaker or writer can be ascertained via their verbal behavior.
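A minimal sketch of the scoring idea in Python. The lexicon, the weights, and the averaging rule are invented for illustration and stand in only very loosely for the Gottschalk-Gleser scales:

```python
# Invented affect lexicon: term -> weight on a single anxiety-like scale.
AFFECT_SCALE = {"forgotten": 0.6, "maybe": 0.4, "afraid": 0.9, "hope": -0.3}

def affect_score(tokens, scale=AFFECT_SCALE):
    """Average the scale weights of the scored terms that appear,
    returning 0.0 when no scored term is present."""
    hits = [scale[t] for t in tokens if t in scale]
    return sum(hits) / len(hits) if hits else 0.0

score = affect_score("i may have maybe forgotten".split())
```

Scores computed this way over many clauses can then be examined statistically, which is the step that lets verbal behavior serve as evidence about the speaker's state.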

Proximity analysis: This approach, on the other hand, is concerned with the co-occurrence of explicit concepts in the text. In this procedure, the text is defined as a string of words. A given length of words, called a window, is determined. The window is then scanned across the text to check for the co-occurrence of concepts. The result is the creation of a concept matrix. In other words, a matrix, or a group of interrelated, co-occurring concepts, might suggest a certain overall meaning. The technique is problematic because the window records only explicit concepts and treats meaning as proximal co-occurrence. Other techniques such as clustering, grouping, and scaling are also useful in proximity analysis.
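The windowing procedure can be sketched in a few lines of Python; the window length, the concept list, and the pair-counting rule are arbitrary choices made for the example:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_matrix(text, concepts, window=5):
    """Slide a fixed-length window across the text (treated as a
    string of words) and count how often each pair of concepts
    co-occurs inside the window."""
    words = text.lower().split()
    matrix = defaultdict(int)
    for start in range(len(words) - window + 1):
        span = set(words[start:start + window])
        present = sorted(c for c in concepts if c in span)
        for a, b in combinations(present, 2):
            matrix[(a, b)] += 1
    return dict(matrix)

counts = cooccurrence_matrix(
    "perhaps the statement implies doubt perhaps not",
    {"perhaps", "doubt"}, window=5)
```

Note that this records only explicit concepts, exactly as the limitation above warns: a window cannot see an implied concept, and every co-occurrence inside the window counts equally.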

Cognitive mapping: This approach allows for further analysis of the results from the two previous approaches, taking those processes one step further by representing the relationships visually for comparison. Whereas affective and proximal analysis function primarily within the preserved order of the text, cognitive mapping attempts to create a model of the overall meaning of the text. This can be represented as a graphic map of the relationships between concepts.

In this manner, cognitive mapping lends itself to the comparison of semantic connections across texts. This is known as map analysis, which allows for comparisons to explore "how meanings and definitions shift across people and time" (Palmquist, Carley, & Dale, 1997). Maps can depict a variety of different mental models (such as that of the text, the writer/speaker, or the social group/period), according to the focus of the researcher. This variety is indicative of the theoretical assumptions that support mapping: mental models are representations of interrelated concepts that reflect conscious or subconscious perceptions of reality; language is the key to understanding these models; and these models can be represented as networks (Carley, 1990). Given these assumptions, it's not surprising to see how closely this technique reflects the cognitive concerns of socio- and psycholinguistics, and lends itself to the development of artificial intelligence models.

Steps for Conducting Relational Analysis

The following discussion outlines the steps (or, perhaps more accurately, strategies) that can be followed to code a text or set of texts during relational analysis. These explanations are accompanied by examples of relational analysis possibilities for statements made by Bill Clinton during the 1998 hearings.

  • Identify the Question.

The question is important because it indicates where you are headed and why. Without a focused question, the concept types and options open to interpretation are limitless, and the analysis is therefore difficult to complete. Possibilities for the 1998 hearings might be:

What did Bill Clinton say in the speech? OR What concrete information did he present to the public?
  • Choose a sample or samples for analysis.

Once the question has been identified, the researcher must select sections of text/speech from the hearings in which Bill Clinton may not have told the entire truth or is obviously holding back information. For relational content analysis, the primary consideration is how much information to preserve for analysis. One must be careful not to limit the results by preserving too little, but the researcher must also take special care not to take on so much that the coding process becomes too unwieldy and extensive to supply worthwhile results.

  • Determine the type of analysis.

Once the sample has been chosen for analysis, it is necessary to determine what type or types of relationships you would like to examine. There are different subcategories of relational analysis that can be used to examine the relationships in texts.

In this example, we will use proximity analysis because it is concerned with the co-occurrence of explicit concepts in the text. In this instance, we are not particularly interested in affect extraction because we are trying to get to the hard facts of what exactly was said rather than determining the emotional considerations of speaker and receivers surrounding the speech which may be unrecoverable.

Once the subcategory of analysis is chosen, the selected text must be reviewed to determine the level of analysis. The researcher must decide whether to code for a single word, such as "perhaps," or for sets of words or phrases like "I may have forgotten."

  • Reduce the text to categories and code for words or patterns.

At the simplest level, a researcher can code merely for existence. This is not to say that simplicity of procedure leads to simplistic results. Many studies have successfully employed this strategy. For example, Palmquist (1990) did not attempt to establish the relationships among concept terms in the classrooms he studied; his study did, however, look at the change in the presence of concepts over the course of the semester, comparing a map analysis from the beginning of the semester to one constructed at the end. On the other hand, the requirement of one's specific research question may necessitate deeper levels of coding to preserve greater detail for analysis.

In relation to our extended example, the researcher might code for how often Bill Clinton used words that were ambiguous, held double meanings, or left an opening for change or "re-evaluation." The researcher might also choose to code for what words he used that have such an ambiguous nature in relation to the importance of the information directly related to those words.
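A simple existence/frequency coding pass of this kind is easy to automate. In the sketch below, the list of ambiguous or hedging terms is invented for illustration; a real study would derive its categories from the research question:

```python
import re
from collections import Counter

# Invented coding category: hedging/ambiguous terms.
HEDGES = {"perhaps", "maybe", "possibly", "unless", "may"}

def code_hedges(text, hedges=HEDGES):
    """Count how often each hedge term occurs in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t in hedges)

speech = "I may have forgotten. Perhaps it happened; maybe it did not."
hedge_counts = code_hedges(speech)
```

Coding for multi-word phrases such as "I may have forgotten" would require phrase matching rather than the single-token lookup shown here.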

  • Explore the relationships between concepts (Strength, Sign & Direction).

Once words are coded, the text can be analyzed for the relationships among the concepts set forth. There are three concepts which play a central role in exploring the relations among concepts in content analysis.

  • Strength of Relationship: Refers to the degree to which two or more concepts are related. These relationships are easiest to analyze, compare, and graph when all relationships between concepts are considered to be equal. However, assigning strength to relationships retains a greater degree of the detail found in the original text. Identifying the strength of a relationship is key when determining whether or not words like unless, perhaps, or maybe are related to a particular section of text, phrase, or idea.
  • Sign of a Relationship: Refers to whether the concepts are positively or negatively related. To illustrate, the concept "bear" is negatively related to the concept "stock market" in the same sense as the concept "bull" is positively related. Thus "it's a bear market" could be coded to show a negative relationship between "bear" and "market". Another approach to coding for sign entails the creation of separate categories for binary oppositions. The above example emphasizes "bull" as the negation of "bear," but the two could instead be coded as separate categories, one positive and one negative. There has been little research to determine the benefits and liabilities of these differing strategies. One use of sign coding for relationships in regard to the hearings might be to find out whether the words under observation or in question were used adversely or in favor of the concepts (this is tricky, but important to establishing meaning).
  • Direction of the Relationship: Refers to the type of relationship categories exhibit. Coding for this sort of information can be useful in establishing, for example, the impact of new information in a decision making process. Various types of directional relationships include "X implies Y," "X occurs before Y," and "if X then Y," or quite simply the decision whether concept X is the "prime mover" of Y or vice versa. In the case of the 1998 hearings, the researcher might note that "maybe implies doubt," "perhaps occurs before statements of clarification," and "if possibly exists, then there is room for Clinton to change his stance." In some cases, concepts can be said to be bi-directional, or as having equal influence. This is equivalent to ignoring directionality. Both approaches are useful, but differ in focus. Coding all categories as bi-directional is most useful for exploratory studies where pre-coding may influence results, and is also most easily automated, or computer coded.
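The three attributes can be carried through coding as a small record attached to each statement. The field names and value conventions below are illustrative, not a standard scheme:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodedRelationship:
    """One coded statement: two concepts plus the three attributes
    discussed above (strength, sign, direction)."""
    source: str
    target: str
    strength: float  # 0.0 (weak) to 1.0 (strong); 1.0 if strengths are ignored
    sign: int        # +1 positively related, -1 negatively related
    directed: bool   # False treats the link as bi-directional

rel = CodedRelationship("maybe", "doubt", strength=0.8, sign=1, directed=True)
```

Treating every relationship as strength 1.0, sign +1, and undirected recovers the simplest coding scheme; each attribute that is actually coded preserves more of the original text's detail.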
  • Code the relationships.

One of the main differences between conceptual analysis and relational analysis is that the statements or relationships between concepts are coded. At this point, to continue our extended example, it is important to take special care with assigning value to the relationships in an effort to determine whether the ambiguous words in Bill Clinton's speech are just fillers, or hold information about the statements he is making.

  • Perform Statistical Analyses.

This step involves conducting statistical analyses of the data you've coded during your relational analysis. This may involve testing for differences or looking for relationships among the variables you've identified in your study.
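As one illustration, a chi-square test on a two-by-two table could check whether hedged clauses occur at different rates in two samples of coded text. The shortcut formula below is the standard one for a 2x2 table, but the counts are made up:

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 contingency table
    [[a, b], [c, d]], e.g. hedged vs. plain clauses in two samples."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: sample 1 has 30 hedged / 70 plain clauses,
# sample 2 has 10 hedged / 90 plain clauses.
stat = chi_square_2x2(30, 70, 10, 90)  # 12.5, well above the 3.84
                                       # critical value at p = .05
```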

  • Map out the Representations.

In addition to statistical analysis, relational analysis often leads to viewing the representations of the concepts and their associations in a text (or across texts) in a graphical -- or map -- form. Relational analysis is also informed by a variety of different theoretical approaches: linguistic content analysis, decision mapping, and mental models.

The authors of this guide have created the following commentaries on content analysis.

Issues of Reliability & Validity

The issues of reliability and validity are concurrent with those addressed in other research methods. The reliability of a content analysis study refers to its stability, or the tendency for coders to consistently re-code the same data in the same way over a period of time; reproducibility, or the tendency for a group of coders to classify category membership in the same way; and accuracy, or the extent to which the classification of a text corresponds statistically to a standard or norm. Gottschalk (1995) points out that the issue of reliability may be further complicated by the inescapably human nature of researchers. For this reason, he suggests that coding errors can only be minimized, and not eliminated (he shoots for 80% as an acceptable margin for reliability).
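Stability and reproducibility are commonly checked with an agreement statistic. The sketch below computes simple percent agreement between two coders; the category labels are invented, and real studies often prefer chance-corrected measures such as Cohen's kappa:

```python
def percent_agreement(coder_a, coder_b):
    """Proportion of units that two coders placed in the same category."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must rate the same units")
    matches = sum(x == y for x, y in zip(coder_a, coder_b))
    return matches / len(coder_a)

first_pass = ["hedge", "plain", "hedge", "plain", "hedge"]
second_pass = ["hedge", "plain", "plain", "plain", "hedge"]
agreement = percent_agreement(first_pass, second_pass)  # 0.8
```

An agreement of 0.8 sits exactly at Gottschalk's suggested margin; anything lower would call for revising the coding rules or retraining the coders.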

On the other hand, the validity of a content analysis study refers to the correspondence of the categories to the conclusions, and the generalizability of results to a theory.

The validity of categories in implicit concept analysis, in particular, is achieved by utilizing multiple classifiers to arrive at an agreed upon definition of the category. For example, a content analysis study might measure the occurrence of the concept category "communist" in presidential inaugural speeches. Using multiple classifiers, the concept category can be broadened to include synonyms such as "red," "Soviet threat," "pinkos," "godless infidels" and "Marxist sympathizers." "Communist" is held to be the explicit variable, while "red," etc. are the implicit variables.

The overarching problem of concept analysis research is the challengeable nature of conclusions reached by its inferential procedures. The question lies in what level of implication is allowable, i.e. do the conclusions follow from the data or are they explainable due to some other phenomenon? For occurrence-specific studies, for example, can the second occurrence of a word carry the same weight as the ninety-ninth? Reasonable conclusions can be drawn from substantive amounts of quantitative data, but the question of proof may still remain unanswered.

This problem is again best illustrated when one uses computer programs to conduct word counts. The problem of distinguishing between synonyms and homonyms can completely throw off one's results, invalidating any conclusions one infers from them. The word "mine," for example, variously denotes a personal pronoun, an explosive device, and a deep hole in the ground from which ore is extracted. One may obtain an accurate count of that word's occurrence and frequency, but not an accurate accounting of the meaning inherent in each particular usage. For example, one may find 50 occurrences of the word "mine." But if one is looking specifically for "mine" as an explosive device, and 17 of the occurrences are actually personal pronouns, the count of 50 is inaccurate, and any conclusion drawn from it would be invalid.
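The "mine" example can be made concrete. In the sketch below, the sense tags are supplied by hand purely for illustration; producing them automatically is exactly the hard part that naive word counts skip:

```python
text = ("the soldier stepped on a mine . that mine was hidden . "
        "the gold mine closed . the fault is mine")

# Naive count: every token "mine" counts, regardless of sense.
naive_count = text.split().count("mine")  # 4

# Hand-tagged senses for the same four occurrences.
tagged = [("mine", "explosive"), ("mine", "explosive"),
          ("mine", "excavation"), ("mine", "pronoun")]
explosive_count = sum(1 for _, sense in tagged if sense == "explosive")  # 2
```

A study counting explosive devices that reported the naive figure would overstate the result by a factor of two, which is precisely the validity problem described above.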

The generalizability of one's conclusions, then, is very dependent on how one determines concept categories, as well as on how reliable those categories are. It is imperative that one defines categories that accurately measure the idea and/or items one is seeking to measure. Akin to this is the construction of rules. Developing rules that allow one, and others, to categorize and code the same data in the same way over a period of time, referred to as stability, is essential to the success of a conceptual analysis. Reproducibility, not only of specific categories, but of general methods applied to establishing all sets of categories, makes a study, and its subsequent conclusions and results, more sound. A study which does this, i.e. in which the classification of a text corresponds to a standard or norm, is said to have accuracy.

Advantages of Content Analysis

Content analysis offers several advantages to researchers who consider using it. In particular, content analysis:

  • looks directly at communication via texts or transcripts, and hence gets at the central aspect of social interaction
  • can allow for both quantitative and qualitative operations
  • can provide valuable historical/cultural insights over time through analysis of texts
  • allows a closeness to the text, which can alternate between specific categories and relationships, while also statistically analyzing the coded form of the text
  • can be used to interpret texts for purposes such as the development of expert systems (since knowledge and rules can both be coded in terms of explicit statements about the relationships among concepts)
  • is an unobtrusive means of analyzing interactions
  • provides insight into complex models of human thought and language use

Disadvantages of Content Analysis

Content analysis suffers from several disadvantages, both theoretical and procedural. In particular, content analysis:

  • can be extremely time consuming
  • is subject to increased error, particularly when relational analysis is used to attain a higher level of interpretation
  • is often devoid of theoretical base, or attempts too liberally to draw meaningful inferences about the relationships and impacts implied in a study
  • is inherently reductive, particularly when dealing with complex texts
  • tends too often to simply consist of word counts
  • often disregards the context that produced the text, as well as the state of things after the text is produced
  • can be difficult to automate or computerize

The Palmquist, Carley and Dale study, summarized here from "Applications of Computer-Aided Text Analysis: Analyzing Literary and Non-Literary Texts" (1997), is an example of two studies that were conducted using both conceptual and relational analysis. The Problematic Text for Content Analysis shows the differences in results obtained by a conceptual and a relational approach to a study.

Related Information: Example of a Problematic Text for Content Analysis

In this example, both students observed a scientist and were asked to write about the experience.

Student A: I found that scientists engage in research in order to make discoveries and generate new ideas. Such research by scientists is hard work and often involves collaboration with other scientists which leads to discoveries which make the scientists famous. Such collaboration may be informal, such as when they share new ideas over lunch, or formal, such as when they are co-authors of a paper.
Student B: It was hard work to research famous scientists engaged in collaboration and I made many informal discoveries. My research showed that scientists engaged in collaboration with other scientists are co-authors of at least one paper containing their new ideas. Some scientists make formal discoveries and have new ideas.

Content analysis coding for explicit concepts may not reveal any significant differences. For example, the concepts "I, scientist, research, hard work, collaboration, discoveries, new ideas, etc." are explicit in both texts, occur the same number of times, and have the same emphasis. Relational analysis or cognitive mapping, however, reveals that while all concepts in the texts are shared, only five statements are common to both. Analyzing these statements reveals that Student A reports on what "I" found out about "scientists," and elaborates the notion of "scientists" doing "research." Student B focuses on what "I's" research was and sees scientists as "making discoveries" without emphasis on research.
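The mechanics of that comparison can be sketched in code. The texts below are shortened stand-ins for the full student passages, and the sentence-level co-occurrence rule is a simplification, so the shared-statement count here (two) differs from the five found in the full example:

```python
import re
from itertools import combinations

CONCEPTS = ["i", "scientists", "research", "hard work",
            "collaboration", "discoveries"]

def statements(text):
    """Pair up concepts that co-occur in the same sentence."""
    result = set()
    for sentence in text.lower().split("."):
        found = [c for c in CONCEPTS
                 if re.search(r"\b" + re.escape(c) + r"\b", sentence)]
        result |= {frozenset(p) for p in combinations(found, 2)}
    return result

student_a = ("I found that scientists engage in research. "
             "Such research is hard work and involves collaboration.")
student_b = ("It was hard work to research famous scientists. "
             "I made many informal discoveries.")

shared = statements(student_a) & statements(student_b)
```

The statements unique to each student, statements(student_a) - shared and the converse, are what reveal the differing emphases the paragraph above describes.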

Related Information: The Palmquist, Carley and Dale Study

Consider these two questions: How has the depiction of robots changed over more than a century's worth of writing? And, do students and writing instructors share the same terms for describing the writing process? Although these questions seem totally unrelated, they do share a commonality: in the Palmquist, Carley & Dale study, their answers rely on computer-aided text analysis to demonstrate how different texts can be analyzed.

Literary texts

One half of the study explored the depiction of robots in 27 science fiction texts written between 1818 and 1988. After the texts were divided into three historically defined groups, readers looked for how the depiction of robots had changed over time. To do this, researchers had to create concept lists and relationship types, create maps using computer software (see Fig. 1), modify those maps, and then ultimately analyze them. The final product of the analysis revealed that over time authors were less likely to depict robots as metallic humanoids.

Non-literary texts

The second half of the study used student journals and interviews, teacher interviews, textbooks, and classroom observations as the non-literary texts from which concepts and words were taken. The purpose behind the study was to determine if, in fact, teachers and students would over time begin to share a similar vocabulary about the writing process. Again, researchers used computer software to assist in the process. This time, computers helped researchers generate a concept list based on frequently occurring words and phrases from all texts. Maps were also created and analyzed in this study (see Fig. 2).

Annotated Bibliography

Resources On How To Conduct Content Analysis

Beard, J., & Yaprak, A. (1989). Language implications for advertising in international markets: A model for message content and message execution. A paper presented at the 8th International Conference on Language Communication for World Business and the Professions. Ann Arbor, MI.

This report discusses the development and testing of a content analysis model for assessing advertising themes and messages aimed primarily at U.S. markets which seeks to overcome barriers in the cultural environment of international markets. Texts were categorized under 3 headings: rational, emotional, and moral. The goal here was to teach students to appreciate differences in language and culture.

Berelson, B. (1971). Content analysis in communication research . New York: Hafner Publishing Company.

While this book provides an extensive outline of the uses of content analysis, it is far more concerned with conveying a critical approach to current literature on the subject. In this respect, it assumes a bit of prior knowledge, but is still accessible through the use of concrete examples.

Budd, R. W., Thorp, R.K., & Donohew, L. (1967). Content analysis of communications . New York: Macmillan Company.

Although published in 1967, the decision of the authors to focus on recent trends in content analysis keeps their insights relevant even to modern audiences. The book focuses on specific uses and methods of content analysis with an emphasis on its potential for researching human behavior. It is also geared toward the beginning researcher and breaks down the process of designing a content analysis study into 6 steps that are outlined in successive chapters. A useful annotated bibliography is included.

Carley, K. (1992). Coding choices for textual analysis: A comparison of content analysis and map analysis. Unpublished Working Paper.

Comparison of the coding choices necessary to conceptual analysis and relational analysis, especially focusing on cognitive maps. Discusses concept coding rules needed for sufficient reliability and validity in a Content Analysis study. In addition, several pitfalls common to texts are discussed.

Carley, K. (1990). Content analysis. In R.E. Asher (Ed.), The Encyclopedia of Language and Linguistics. Edinburgh: Pergamon Press.

Quick, yet detailed, overview of the different methodological kinds of Content Analysis. Carley breaks down her paper into five sections, including: Conceptual Analysis, Procedural Analysis, Relational Analysis, Emotional Analysis and Discussion. Also included is an excellent and comprehensive Content Analysis reference list.

Carley, K. (1989). Computer analysis of qualitative data . Pittsburgh, PA: Carnegie Mellon University.

Presents graphic, illustrated representations of computer based approaches to content analysis.

Carley, K. (1992). MECA . Pittsburgh, PA: Carnegie Mellon University.

A resource guide explaining the fifteen routines that compose the Map Extraction Comparison and Analysis (MECA) software program. Lists the source file, input and out files, and the purpose for each routine.

Carney, T. F. (1972). Content analysis: A technique for systematic inference from communications . Winnipeg, Canada: University of Manitoba Press.

This book introduces and explains in detail the concept and practice of content analysis. Carney defines it; traces its history; discusses how content analysis works and its strengths and weaknesses; and explains through examples and illustrations how one goes about doing a content analysis.

de Sola Pool, I. (1959). Trends in content analysis . Urbana, Ill: University of Illinois Press.

The 1959 collection of papers begins by differentiating quantitative and qualitative approaches to content analysis, and then details facets of its uses in a wide variety of disciplines: from linguistics and folklore to biography and history. Includes a discussion on the selection of relevant methods and representational models.

Duncan, D. F. (1989). Content analysis in health education research: An introduction to purposes and methods. Health Education, 20 (7).

This article proposes using content analysis as a research technique in health education. A review of literature relating to applications of this technique and a procedure for content analysis are presented.

Gottschalk, L. A. (1995). Content analysis of verbal behavior: New findings and clinical applications. Hillside, NJ: Lawrence Erlbaum Associates, Inc.

This book primarily focuses on the Gottschalk-Gleser method of content analysis, and its application as a method of measuring psychological dimensions of children and adults via the content and form analysis of their verbal behavior, using the grammatical clause as the basic unit of communication for carrying semantic messages generated by speakers or writers.

Krippendorf, K. (1980). Content analysis: An introduction to its methodology. Beverly Hills, CA: Sage Publications.

This is one of the most widely quoted resources in many of the current studies of Content Analysis. Recommended as another good, basic resource, as Krippendorf presents the major issues of Content Analysis in much the same way as Weber (1975).

Moeller, L. G. (1963). An introduction to content analysis--including annotated bibliography . Iowa City: University of Iowa Press.

A good reference for basic content analysis. Discusses the options of sampling, categories, direction, measurement, and the problems of reliability and validity in setting up a content analysis. Perhaps better as a historical text due to its age.

Smith, C. P. (Ed.). (1992). Motivation and personality: Handbook of thematic content analysis. New York: Cambridge University Press.

Billed by its authors as "the first book to be devoted primarily to content analysis systems for assessment of the characteristics of individuals, groups, or historical periods from their verbal materials." The text includes manuals for using various systems, theory, and research regarding the background of systems, as well as practice materials, making the book both a reference and a handbook.

Solomon, M. (1993). Content analysis: a potent tool in the searcher's arsenal. Database, 16 (2), 62-67.

Online databases can be used to analyze data, as well as to simply retrieve it. Online-media-source content analysis represents a potent but little-used tool for the business searcher. Content analysis benchmarks useful to advertisers include prominence, offspin, sponsor affiliation, verbatims, word play, positioning and notational visibility.

Weber, R. P. (1990). Basic content analysis, second edition . Newbury Park, CA: Sage Publications.

Good introduction to Content Analysis. The first chapter presents a quick overview of Content Analysis. The second chapter discusses content classification and interpretation, including sections on reliability, validity, and the creation of coding schemes and categories. Chapter three discusses techniques of Content Analysis, using a number of tables and graphs to illustrate the techniques. Chapter four examines issues in Content Analysis, such as measurement, indication, representation and interpretation.

Examples of Content Analysis

Adams, W., & Shriebman, F. (1978). Television network news: Issues in content research . Washington, DC: George Washington University Press.

A fairly comprehensive application of content analysis to the field of television news reporting. The book's tripartite division discusses current trends and problems in news criticism from a content analysis perspective, presents four different content analysis studies of news media, and makes recommendations for future research in the area. Worth a look by anyone interested in mass communication research.

Auter, P. J., & Moore, R. L. (1993). Buying from a friend: a content analysis of two teleshopping programs. Journalism Quarterly, 70 (2), 425-437.

A preliminary study was conducted to content-analyze random samples of two teleshopping programs, using a measure of content interactivity and a locus of control message index.

Barker, S. P. (???) Fame: A content analysis study of the American film biography. Ohio State University. Thesis.

Barker examined thirty Oscar-nominated films dating from 1929 to 1979, using the O.J. Harvey Belief System and Kohlberg's Moral Stages to determine whether cinema heroes were positive role models for fame and success or morally ambiguous celebrities. Content analysis was successful in determining several trends relative to the frequency and portrayal of women in film, the generally high ethical character of the protagonists, and the dogmatic, close-minded nature of film antagonists.

Bernstein, J. M. & Lacy, S. (1992). Contextual coverage of government by local television news. Journalism Quarterly, 69 (2), 329-341.

This content analysis of 14 local television news operations in five markets looks at how local TV news shows contribute to the marketplace of ideas. Performance was measured as the allocation of stories to types of coverage that provide the context about events and issues confronting the public.

Blaikie, A. (1993). Images of age: a reflexive process. Applied Ergonomics, 24 (1), 51-58.

Content analysis of magazines provides a sharp instrument for reflecting the change in stereotypes of aging over past decades.

Craig, R. S. (1992). The effect of day part on gender portrayals in television commercials: a content analysis. Sex Roles: A Journal of Research, 26 (5-6), 197-213.

Gender portrayals in 2,209 network television commercials were content analyzed. To compare differences between three day parts, the sample was chosen from three time periods: daytime, evening prime time, and weekend afternoon sportscasts. The results indicate large and consistent differences in the way men and women are portrayed in these three day parts, with almost all comparisons reaching significance at the .05 level. Although ads in all day parts tended to portray men in stereotypical roles of authority and dominance, those on weekends tended to emphasize escape from home and family. The findings of earlier studies which did not consider day part differences may now have to be reevaluated.

Dillon, D. R. et al. (1992). Article content and authorship trends in The Reading Teacher, 1948-1991. The Reading Teacher, 45 (5), 362-368.

The authors explore changes in the focus of the journal over time.

Eberhardt, EA. (1991). The rhetorical analysis of three journal articles: The study of form, content, and ideology. Ft. Collins, CO: Colorado State University.

Eberhardt uses content analysis in this thesis to analyze three journal articles that reported on President Ronald Reagan's address in which he responded to the Tower Commission report concerning the Iran-Contra Affair. The reports concentrated on three rhetorical elements: idea generation or content; linguistic style or choice of language; and the potential societal effect of both. Eberhardt analyzes these elements along with the particular ideological orientation espoused by each magazine.

Ellis, B. G. & Dick, S. J. (1996). 'Who was 'Shadow'? The computer knows: applying grammar-program statistics in content analyses to solve mysteries about authorship. Journalism & Mass Communication Quarterly, 73 (4), 947-963.

This study's objective was to employ the statistics-documentation portion of a word-processing program's grammar-check feature as a final, definitive, and objective tool for content analyses - used in tandem with qualitative analyses - to determine authorship. Investigators concluded there was significant evidence from both modalities to support their theory that Henry Watterson, long-time editor of the Louisville Courier-Journal, probably was the South's famed Civil War correspondent "Shadow" and to rule out another prime suspect, John H. Linebaugh of the Memphis Daily Appeal. Until now, this Civil War mystery has never been conclusively solved, puzzling historians specializing in Confederate journalism.

Gottschalk, L. A., Stein, M. K. & Shapiro, D. H. (1997). The application of computerized content analysis in a psychiatric outpatient clinic. Journal of Clinical Psychology, 53 (5), 427-442.

Twenty-five new psychiatric outpatients were clinically evaluated and were administered a brief psychological screening battery which included measurements of symptoms, personality, and cognitive function. Included in this assessment procedure were the Gottschalk-Gleser Content Analysis Scales, on which scores were derived from five-minute speech samples by means of an artificial intelligence-based computer program. The use of this computerized content analysis procedure for initial, rapid diagnostic neuropsychiatric appraisal is supported by this research.

Graham, J. L., Kamins, M. A., & Oetomo, D. S. (1993). Content analysis of German and Japanese advertising in print media from Indonesia, Spain, and the United States. Journal of Advertising , 22 (2), 5-16.

The authors analyze informational and emotional content in print advertisements in order to consider how home-country culture influences firms' marketing strategies and tactics in foreign markets. Research results provided evidence contrary to the original hypothesis that home-country culture would influence ads in each of the target countries.

Herzog, A. (1973). The B.S. Factor: The theory and technique of faking it in America . New York: Simon and Schuster.

Herzog takes a look at the rhetoric of American culture using content analysis to point out discrepancies between intention and reality in American society. The study reveals, albeit in a comedic tone, how double talk and "not quite lies" are pervasive in our culture.

Horton, N. S. (1986). Young adult literature and censorship: A content analysis of seventy-eight young adult books . Denton, TX: North Texas State University.

The purpose of Horton's content analysis was to analyze a representative sample of seventy-eight current young adult books to determine the extent to which they contain items that are objectionable to would-be censors. Seventy-eight books were identified which fit the criteria of popularity and literary quality. Each book was analyzed for, and tallied for the occurrence of, six categories: profanity, sex, violence, parent conflict, drugs, and condoned bad behavior.

Isaacs, J. S. (1984). A verbal content analysis of the early memories of psychiatric patients . Berkeley: California School of Professional Psychology.

Isaacs did a content analysis investigation on the relationship between words and phrases used in early memories and clinical diagnosis. His hypothesis was that in conveying their early memories schizophrenic patients tend to use an identifiable set of words and phrases more frequently than do nonpatients and that schizophrenic patients use these words and phrases more frequently than do patients with major affective disorders.

Jean Lee, S. K. & Hwee Hoon, T. (1993). Rhetorical vision of men and women managers in Singapore. Human Relations, 46 (4), 527-542.

A comparison is made of the media portrayal of male and female managers' rhetorical vision in Singapore. The content analysis of newspaper articles used to make this comparison also reveals the inherent conflicts that women managers face. Purposive and multi-stage sampling of articles is utilized.

Kaur-Kasior, S. (1987). The treatment of culture in greeting cards: A content analysis . Bowling Green, OH: Bowling Green State University.

Using six historical periods dating from 1870 to 1987, this content analysis study attempted to determine what structural/cultural aspects of American society were reflected in greeting cards. The study determined that the size of cards increased over time, included more pages, and had animals and flowers as their most dominant symbols. In addition, white was the most common color used. Due to habituation and specialization, says the author, greeting cards have become institutionalized in American culture.

Koza, J. E. (1992). The missing males and other gender-related issues in music education: A critical analysis of evidence from the Music Supervisor's Journal, 1914-1924. Paper presented at the annual meeting of the American Educational Research Association. San Francisco.

The goal of this study was to identify all educational issues that would today be explicitly gender related and to analyze the explanations past music educators gave for the existence of gender-related problems. A content analysis of every gender-related reference was undertaken, finding that the current preoccupation with males in music education has a long history and that little has changed since the early part of this century.

Laccinole, M. D. (1982). Aging and married couples: A language content analysis of a conversational and expository speech task . Eugene, OR: University of Oregon.

Using content analysis, this paper investigated the relationship of age to the use of grammatical categories, and described the differences in the usage of these grammatical categories in conversational and expository speech tasks performed by fifty married couples. The subjects Laccinole used in his analysis were Caucasian, English-speaking, middle-class, aged 20 to 83, in good health, and had no history of communication disorders.
Laffal, J. (1995). A concept analysis of Jonathan Swift's 'A Tale of a Tub' and 'Gulliver's Travels.' Computers and Humanities, 29 (5), 339-362.

In this study, comparisons of concept profiles of "Tub," "Gulliver," and Swift's own contemporary texts, as well as a composite text of 18th century writers, reveal that "Gulliver" is conceptually different from "Tub." The study also discovers that the concepts and words of these texts suggest two strands in Swift's thinking.

Lewis, S. M. (1991). Regulation from a deregulatory FCC: Avoiding discursive dissonance. Masters Thesis, Fort Collins, CO: Colorado State University.

This thesis uses content analysis to examine inconsistent statements made by the Federal Communications Commission (FCC) in its policy documents during the 1980s. Lewis analyzes positions set forth by the FCC in its policy statements and catalogues different strategies that can be used by speakers to be or to appear consistent, as well as strategies to avoid inconsistent speech or discursive dissonance.

Norton, T. L. (1987). The changing image of childhood: A content analysis of Caldecott Award books. Los Angeles: University of South Carolina.

Content analysis was conducted on 48 Caldecott Medal recipient books dating from 1938 to 1985 to determine whether they reflect the idea that the social perception of childhood has altered since the early 1960s. The results revealed an increasing "loss of childhood innocence," as well as a general sentimentality for childhood pervasive in the texts. Norton suggests further study of children's literature to confirm the validity of such findings.

O'Dell, J. W. & Weideman, D. (1993). Computer content analysis of the Schreber case. Journal of Clinical Psychology, 49 (1), 120-125.

An example of the application of content analysis as a means of recreating a mental model of the psychology of an individual.

Pratt, C. A. & Pratt, C. B. (1995). Comparative content analysis of food and nutrition advertisements in Ebony, Essence, and Ladies' Home Journal. Journal of Nutrition Education, 27 (1), 11-18.

This study used content analysis to measure the frequencies and forms of food, beverage, and nutrition advertisements and their associated health-promotional messages in three U.S. consumer magazines during two 3-year periods: 1980-1982 and 1990-1992. The study showed statistically significant differences among the three magazines in both the frequencies and the types of major promotional messages in the advertisements. Differences between the advertisements in Ebony and Essence, the readerships of which were primarily African-American, and those found in Ladies' Home Journal were noted, as were changes between the two time periods. An interesting tie-in to ethnographic research studies.

Riffe, D., Lacy, S., & Drager, M. W. (1996). Sample size in content analysis of weekly news magazines. Journalism & Mass Communication Quarterly, 73 (3), 635-645.

This study explores a variety of approaches to deciding sample size in analyzing magazine content. Having tested random samples of six, eight, ten, twelve, fourteen, and sixteen issues, the authors show that a monthly stratified sample of twelve issues is the most efficient method for inferring to a year's issues.

Roberts, S. K. (1987). A content analysis of how male and female protagonists in Newbery Medal and Honor books overcome conflict: Incorporating a locus of control framework. Fayetteville, AR: University of Arkansas.

The purpose of this content analysis was to analyze Newbery Medal and Honor books in order to determine how male and female protagonists were assigned behavioral traits in overcoming conflict as it relates to an internal or external locus of control schema. Roberts used all, instead of just a sample, of the fictional Newbery Medal and Honor books which met his study's criteria. A total of 120 male and female protagonists were categorized, from Newbery books dating from 1922 to 1986.

Schneider, J. (1993). Square One TV content analysis: Final report . New York: Children's Television Workshop.

This report summarizes the mathematical and pedagogical content of the 230 programs in the Square One TV library after five seasons of production, relating that content to the goals of the series which were to make mathematics more accessible, meaningful, and interesting to the children viewers.

Smith, T. E., Sells, S. P., and Clevenger, T. Ethnographic content analysis of couple and therapist perceptions in a reflecting team setting. The Journal of Marital and Family Therapy, 20 (3), 267-286.

An ethnographic content analysis was used to examine couple and therapist perspectives about the use and value of reflecting team practice. Postsession ethnographic interviews from both couples and therapists were examined for the frequency of themes in seven categories that emerged from a previous ethnographic study of reflecting teams. Ethnographic content analysis is briefly contrasted with conventional modes of quantitative content analysis to illustrate its usefulness and rationale for discovering emergent patterns, themes, emphases, and process using both inductive and deductive methods of inquiry.

Stahl, N. A. (1987). Developing college vocabulary: A content analysis of instructional materials. Reading Research and Instruction, 26 (3).

This study investigates the extent to which the content of 55 college vocabulary texts is consistent with current research and theory on vocabulary instruction. It recommends less reliance on memorization and more emphasis on deep understanding and independent vocabulary development.

Swetz, F. (1992). Fifteenth and sixteenth century arithmetic texts: What can we learn from them? Science and Education, 1 (4).

Surveys the format and content of 15th and 16th century arithmetic textbooks, discussing the types of problems that were most popular in these early texts and briefly analyses problem contents. Notes the residual educational influence of this era's arithmetical and instructional practices.
Walsh, K., et al. (1996). Management in the public sector: a content analysis of journals. Public Administration, 74 (2), 315-325.

The popularity and implementation of managerial ideas from 1980 to 1992 are examined through the content of five journals focusing on local government, health, education, and social services. Contents were analyzed according to commercialism, user involvement, performance evaluation, staffing, strategy, and involvement with other organizations. Overall, local government showed the greatest involvement with commercialism, while health and social care articles were most concerned with user involvement.

For Further Reading

Abernethy, A. M., & Franke, G. R. (1996). The information content of advertising: a meta-analysis. Journal of Advertising, 25 (2), 1-18.

Carley, K., & Palmquist, M. (1992). Extracting, representing and analyzing mental models. Social Forces , 70 (3), 601-636.

Fan, D. (1988). Predictions of public opinion from the mass media: Computer content analysis and mathematical modeling . New York, NY: Greenwood Press.

Franzosi, R. (1990). Computer-assisted coding of textual data: An application to semantic grammars. Sociological Methods and Research, 19 (2), 225-257.

McTavish, D.G., & Pirro, E. (1990) Contextual content analysis. Quality and Quantity , 24 , 245-265.

Palmquist, M. E. (1990). The lexicon of the classroom: language and learning in writing classrooms. Doctoral dissertation, Carnegie Mellon University, Pittsburgh, PA.

Palmquist, M. E., Carley, K. M., and Dale, T. A. (1997). Two applications of automated text analysis: Analyzing literary and non-literary texts. In C. Roberts (Ed.), Text Analysis for the Social Sciences: Methods for Drawing Statistical Inferences from Texts and Transcripts. Hillsdale, NJ: Lawrence Erlbaum Associates.

Roberts, C.W. (1989). Other than counting words: A linguistic approach to content analysis. Social Forces, 68 , 147-177.

Issues in Content Analysis

Jolliffe, L. (1993). Yes! More content analysis! Newspaper Research Journal , 14 (3-4), 93-97.

The author responds to an editorial essay by Barbara Luebke which criticizes excessive use of content analysis in newspaper content studies. The author points out the positive applications of content analysis when it is theory-based and utilized as a means of suggesting how or why the content exists, or what its effects on public attitudes or behaviors may be.

Kang, N., Kara, A., Laskey, H. A., & Seaton, F. B. (1993). A SAS MACRO for calculating intercoder agreement in content analysis. Journal of Advertising, 22 (2), 17-28.

A key issue in content analysis is the level of agreement across the judgments which classify the objects or stimuli of interest. A review of articles published in the Journal of Advertising indicates that many authors are not fully utilizing recommended measures of intercoder agreement and thus may not be adequately establishing the reliability of their research. This paper presents a SAS MACRO which facilitates the computation of frequently recommended indices of intercoder agreement in content analysis.
Lacy, S. & Riffe, D. (1996). Sampling error and selecting intercoder reliability samples for nominal content categories. Journalism & Mass Communication Quarterly, 73 (4), 693-704.

This study views intercoder reliability as a sampling problem. It develops a formula for generating sample sizes needed to have valid reliability estimates. It also suggests steps for reporting reliability. The resulting sample sizes will permit a known degree of confidence that the agreement in a sample of items is representative of the pattern that would occur if all content items were coded by all coders.

Riffe, D., Aust, C. F., & Lacy, S. R. (1993). The effectiveness of random, consecutive day and constructed week sampling in newspaper content analysis. Journalism Quarterly, 70 (1), 133-139.

This study compares 20 sets each of samples for four different sizes using simple random, constructed week and consecutive day samples of newspaper content. Comparisons of sample efficiency, based on the percentage of sample means in each set of 20 falling within one or two standard errors of the population mean, show the superiority of constructed week sampling.

Thomas, S. (1994). Artifactual study in the analysis of culture: A defense of content analysis in a postmodern age. Communication Research, 21 (6), 683-697.

Although both modern and postmodern scholars have criticized the method of content analysis with allegations of reductionism and other epistemological limitations, it is argued here that these criticisms are ill-founded. In building an argument for the validity of content analysis, the general value of artifact or text study is first considered.

Zollars, C. (1994). The perils of periodical indexes: Some problems in constructing samples for content analysis and culture indicators research. Communication Research, 21 (6), 698-714.

The author examines problems in using periodical indexes to construct research samples for content analysis and culture indicator research. Historical and idiosyncratic changes in index subject category headings and subheadings make article headings potentially misleading indicators. Index subject categories are not necessarily invalid as a result; nevertheless, the author discusses the need to test for category longevity, coherence, and consistency over time, and suggests the use of oversampling, cross-references, and other techniques as a means of correcting and/or compensating for hidden inaccuracies in classification, and as a means of constructing purposive samples for analytic comparisons.

Busch, Carol, Paul S. De Maret, Teresa Flynn, Rachel Kellum, Sheri Le, Brad Meyers, Matt Saunders, Robert White, and Mike Palmquist. (2005). Content Analysis. Writing@CSU . Colorado State University. https://writing.colostate.edu/guides/guide.cfm?guideid=61


How to Conduct Content Analysis

Published 2nd April 2023

If you are a researcher in marketing, advertising, or academia, you know the importance and challenges of conducting effective and reliable content analysis. In this article, we break down content analysis into eight steps that will help yield credible and reliable research results.

What Is Content Analysis?

We use content analysis as a research method to systematically analyze and categorize qualitative data, such as written or visual content, and identify patterns, themes, and meanings. Content analysis involves developing a coding framework or a set of categories to systematically analyze the content. Coding categories can be qualitative or quantitative and are based on predefined criteria.

We use content analysis for various research purposes and fields, such as communication studies, media studies, the social sciences, marketing, and psychology. We can do the analysis manually or with the help of software tools, and the task requires attention to detail and rigorous examination for valid and reliable results.

Why Is Content Analysis Important?

Content analysis is an important research method that can provide valuable insights into qualitative data , help uncover patterns and themes, enhance the rigor of research findings, and inform decision-making and policy development in various fields. We use content analysis in different contexts:

  • Systematic approach to data analysis: Content analysis provides a systematic and structured approach to analyzing qualitative data, such as written or visual content. The analysis allows researchers to examine and categorize data objectively based on predefined criteria. Ensuring consistency and reliability is part of the analysis process.
  • Identifying patterns and themes: By systematically coding and analyzing content, researchers can identify common themes, recurring patterns, and trends that may not be immediately apparent through qualitative observation alone. These actions can help reveal underlying meanings, messages, or concepts that may be important for understanding the content.
  • Quantifying qualitative data: By assigning codes or categories to specific aspects of the content, researchers can quantify the frequency, distribution, or relationships among different categories, providing a quantitative basis for analysis and interpretation.
  • Exploring social and cultural representations: Content analysis can help uncover and analyze social and cultural representations in texts, media content, or other forms of communication. The analysis can shed light on how certain groups, communities, or cultures are portrayed, represented, or framed in different types of content, providing insights into issues such as media bias, cultural norms, and social dynamics.
  • Informing decision-making and policy: Content analysis findings can inform decision-making and policy development in various contexts. For example, content analysis of social media data can provide insights into public opinion, sentiment, or trends on important social or political issues, and these insights, in turn, can inform policy decisions. We can also use content analysis to assess the portrayal of certain groups or issues in the media; these depictions can have implications for media policy or advocacy efforts.

How to Conduct Content Analysis: Eight Steps

Step 1: Define Your Research Questions and Objectives

Your research questions should identify the issues you will address in your study and will help you plan your investigation. Following your research questions, your research objectives should clearly state the steps you will take to fulfill the aim(s) of your research.

Making sure you have clearly defined questions and objectives is the foundational step of any research. Therefore, take the time to clearly identify what they are before you begin your content analysis.

Step 2: Select Your Content

Decide on the specific content you want to analyze. It could be documents, texts, images, videos, social media posts, or any other form of content that is relevant to your research question.

Step 3: Develop Coding Categories

Create a coding framework or a set of categories that you’ll use to analyze the content systematically. Coding categories are the labels or codes that you’ll assign to different aspects of the content. They should be mutually exclusive and exhaustive, meaning that each piece of content should fit into one and only one category.

Coding categories can be qualitative or quantitative, depending on the nature of the data and the research goals. Qualitative coding categories typically involve assigning labels or codes to different themes, concepts, or patterns that emerge from the content. Quantitative coding categories, on the other hand, often involve counting the frequency or occurrence of specific features or characteristics in the content.

Some examples of coding categories include:

●  Themes or concepts

●  Sentiments or emotions

●  Actors or sources

●  Frames or perspectives

●  Visual elements

●  Time periods
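To make this concrete, here is a minimal sketch of a coding framework expressed in code. The category names and keyword rules below are hypothetical and purely illustrative; a real framework is grounded in theory, spelled out in your coding guide, and refined through pilot testing.

```python
# Hypothetical coding framework: each category is defined by keyword rules.
# Real frameworks are developed from theory and refined via pilot testing.
CATEGORIES = {
    "positive_sentiment": {"good", "great", "excellent"},
    "negative_sentiment": {"bad", "poor", "terrible"},
}

def code_text(text):
    """Return every category whose keywords appear in the text."""
    words = set(text.lower().split())
    return [cat for cat, keywords in CATEGORIES.items() if words & keywords]

# "Mutually exclusive and exhaustive" means each piece of content should
# receive exactly one category; any other result signals that the
# framework needs refinement.
matches = code_text("The food was terrible")
print(matches)  # ['negative_sentiment']
```

A piece of content that matches zero categories (the framework is not exhaustive) or more than one (the categories are not mutually exclusive) is a useful signal during pilot testing.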


Step 4: Create a Coding Guide

Develop a detailed coding guide that provides instructions on how to apply the coding categories to the content. The coding guide should include definitions of each category, examples, and guidelines for making coding decisions. These elements will ensure consistency and reliability in your analysis.
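Consistency between coders who follow the same guide is commonly checked with an intercoder agreement statistic such as Cohen's kappa. The sketch below is a minimal illustration with hypothetical category labels, not a full reliability workflow.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' category assignments over the same items."""
    n = len(coder_a)
    # Observed agreement: proportion of items both coders labeled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's category proportions.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["theme1", "theme2", "theme1", "theme3", "theme1"]
b = ["theme1", "theme2", "theme2", "theme3", "theme1"]
print(round(cohens_kappa(a, b), 2))  # 0.69
```

Kappa corrects raw percent agreement for the agreement two coders would reach by chance, which is why it is often preferred for establishing reliability.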

Step 5: Pilot-Test and Refine Coding Categories

Conduct a pilot test by coding a small subset of your content to ascertain the effectiveness and clarity of your coding categories and coding guide. Based on the results of the pilot test, refine your coding categories and guide as needed.

Step 6: Conduct the Content Analysis

Once you’ve finalized your coding categories and coding guide, apply them to the rest of your content. Doing this involves systematically reviewing and coding each piece of content according to the coding categories and guidelines in your coding guide. You can perform these tasks manually or by using software tools designed for content analysis. Some software tools you might consider include:

●  NVivo

●  MAXQDA

●  Dedoose

●  ATLAS.ti

●  QDA Miner

●  Coding Analysis Toolkit (CAT)

Step 7: Analyze and Interpret the Data

Once all the content has been coded, analyze the coded data to identify patterns, trends, and themes. This task may involve quantitative analysis, such as calculating frequencies or percentages of coded categories, as well as qualitative analysis, such as identifying recurring themes or interpreting the meaning behind the codes.
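For the quantitative side, here is a minimal sketch (using hypothetical category labels) of calculating frequencies and percentages of coded categories:

```python
from collections import Counter

# Hypothetical coded data: one category label per analyzed content item.
coded = ["theme_a", "theme_b", "theme_a", "theme_c", "theme_a", "theme_b"]

counts = Counter(coded)
total = len(coded)
for category, count in counts.most_common():
    print(f"{category}: {count} ({count / total:.0%})")
# theme_a: 3 (50%)
# theme_b: 2 (33%)
# theme_c: 1 (17%)
```

These frequency tables then become the input for qualitative interpretation: which themes dominate, and what that dominance means in context.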

Step 8: Draw Conclusions and Report Findings

Based on your analysis, draw conclusions and report your findings . Clearly explain the results of your content analysis and their connection to your research questions or objectives. Use evidence from your coded data to support your conclusions.

Additionally, remember to document your coding process thoroughly, keep track of any decisions or changes made during the analysis, and be mindful of potential biases or limitations in your coding. Content analysis requires careful attention to detail and rigorous examination to ensure valid and reliable results.

Conclusions

Content analysis is a crucial research methodology for an array of fields. While the analysis can be time-consuming and often expensive to conduct, the results are invaluable to the validity and efficacy of your research.


What is Content Analysis – Steps & Examples

Published by Alvin Nicolas at August 16th, 2021 , Revised On August 29, 2023

“The content analysis identifies specific words, patterns, concepts, themes, phrases, characters, or sentences within the recorded communication content.”

To conduct content analysis, you need to gather data from multiple sources; it can be anything or any form of data, including text, audio, or videos.

Depending on the requirements of your analysis, you may have to use a  primary or secondary form of data , including:

  • Videos
  • Transcripts
  • Images
  • Newspapers
  • Books
  • Literature
  • Biographies
  • Documents
  • Oral statements/conversations
  • Textbooks
  • Encyclopedias
  • Periodicals
  • Social media posts
  • Articles

The Purpose of Content Analysis

Content analysis serves many objectives; some fundamental ones are given below.

  • To simplify the content.
  • To get a clear, in-depth meaning of the language.
  • To identify the uses of language.
  • To know the impact of language on society.
  • To find out the association of the language with cultures, interpersonal relationships, and communication.
  • To gain an in-depth understanding of the concept.
  • To find out the context, behaviour, and response of the speaker.
  • To analyse the trends and association between the text and multimedia.

When to Use Content Analysis? 

There are many uses of content analysis; some of them are listed below. Content analysis is used:

  • To represent the content precisely by breaking it into short form.
  • To describe the characteristics of the content.
  • To support an argument.
  • To extract essential information from a large amount of data.

It is used in many walks of life, including marketing, media, and literature.

Types of Content Analysis

Content analysis is a broad concept with various types used across many fields, and people from all walks of life adapt it to their needs. Some popular methods are given below:

1. Relational analysis: examines the associations between concepts in human communication. Example: What other words are used next to a given word or its synonyms, and what kind of meaning is produced by this group of words?

2. Unobtrusive research: a method of studying social behaviour without collecting data directly from the subject group. Example: Durkheim's analysis of suicide.

3. Conceptual analysis: analyses the existence and frequency of concepts in human communication. Example: In "Smoking can have adverse effects on your health," you can count how many times the word "smoking" or its synonyms are used in the communication.


Advantages and Disadvantages of Content Analysis

Content analysis has many benefits, which are given below.

Content analysis:

  • Offers both qualitative and quantitative analysis of communication.
  • Provides an in-depth understanding of the content by making it precise.
  • Enables us to understand the context and perception of the speaker.
  • Provides insight into complex models of human thought and language use.
  • Provides historical and cultural insight.
  • Can be applied to any time period, place, or population.
  • Helps in studying a language, its origin, and its association with society and culture.

Disadvantages

There are also some disadvantages to using content analysis:

  • It is very time-consuming.
  • It cannot always interpret a large amount of data accurately and is subject to increased error.
  • It cannot easily be computerised.

How to Conduct a Content Analysis?

If you want to conduct a content analysis, follow the steps given below.

Develop a Research Question and Select the Content

It’s essential to have a  research question to proceed with your study.  After selecting your research question, you need to find out the relevant resources to analyse.

Example: Suppose you want to find out the impact of plagiarism on the credibility of authors. You could examine the relevant material available on the topic from the internet, newspapers, and books published during the past 5-10 years.

Read the Content Thoroughly

At this point, you have to read the content thoroughly until you understand it. 

Condensation

Break the text into smaller portions for clearer interpretation. In short, you have to create categories, or smaller units of text, from the large amount of given data.

The unit of analysis is the basic unit of text to be classified. It can be a word, a phrase, a theme, a plot, or a newspaper article.
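As a quick illustration, the snippet below condenses a passage into sentence-level units; the passage is hypothetical, and in a real study the unit might instead be a word, phrase, or theme.

```python
import re

# Hypothetical passage to condense (illustration only).
passage = ("Plagiarism damages an author's credibility. Readers lose "
           "trust quickly. Publishers may retract plagiarised work.")

# Here the unit of analysis is the sentence: split after end punctuation.
units = [s.strip() for s in re.split(r"(?<=[.!?])\s+", passage) if s.strip()]

for number, unit in enumerate(units, start=1):
    print(number, unit)
```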

Code the Content

Going through textual data takes a long time. Coding is a way of tagging the data and organising it into a sequence of symbols, numbers, and letters to highlight the relevant points. At this stage, you draw meaning from the condensed parts: make sure you clearly understand the meaning and context of the text and the speaker.
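The tagging described above can be represented very simply in code. The units and code labels below are hypothetical; the point is only that each condensed unit carries one or more codes, and that an index from code to units makes the evidence easy to retrieve.

```python
# Each condensed unit is tagged with one or more shorthand codes
# (all labels here are hypothetical, for illustration).
coded_units = [
    ("Plagiarism damages an author's credibility.", ["credibility"]),
    ("Readers lose trust quickly.", ["trust", "audience"]),
    ("Publishers may retract plagiarised work.", ["consequences"]),
]

# Invert the tagging: collect, for each code, the units that carry it.
index = {}
for unit, codes in coded_units:
    for code in codes:
        index.setdefault(code, []).append(unit)

for code in sorted(index):
    print(code, "->", len(index[code]), "unit(s)")
```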

Analyse and Interpret the Data

You can use statistical analysis to analyse the data. It is a method of collecting, analysing, and interpreting ample data to discover underlying patterns and details. Aim to retain the meaning of the content while making it precise.
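For instance, once the content is coded, even a simple frequency table can reveal the dominant patterns. The tallies below are hypothetical.

```python
from collections import Counter

# Hypothetical tallies of how often each code appears in the data.
code_counts = Counter({"credibility": 14, "trust": 9, "consequences": 5})

total = sum(code_counts.values())
# Report each code as a share of all coded segments.
for code, n in code_counts.most_common():
    print(f"{code}: {n} ({n / total:.0%})")
```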

Frequently Asked Questions

How do you perform content analysis?

To perform content analysis:

  • Define research objectives.
  • Select a representative sample.
  • Develop coding categories.
  • Apply coding to the data.
  • Analyse the content systematically.
  • Interpret results to draw insights about themes, patterns, and meanings.



How to Do Thematic Analysis | Step-by-Step Guide & Examples

Published on September 6, 2019 by Jack Caulfield. Revised on June 22, 2023.

Thematic analysis is a method of analyzing qualitative data. It is usually applied to a set of texts, such as interview transcripts. The researcher closely examines the data to identify common themes – topics, ideas and patterns of meaning that come up repeatedly.

There are various approaches to conducting thematic analysis, but the most common form follows a six-step process: familiarization, coding, generating themes, reviewing themes, defining and naming themes, and writing up. Following this process can also help you avoid confirmation bias when formulating your analysis.

This process was originally developed for psychology research by Virginia Braun and Victoria Clarke. However, thematic analysis is a flexible method that can be adapted to many different kinds of research.

Table of contents

  • When to use thematic analysis
  • Different approaches to thematic analysis
  • Step 1: Familiarization
  • Step 2: Coding
  • Step 3: Generating themes
  • Step 4: Reviewing themes
  • Step 5: Defining and naming themes
  • Step 6: Writing up
  • Other interesting articles

Thematic analysis is a good approach to research where you’re trying to find out something about people’s views, opinions, knowledge, experiences or values from a set of qualitative data – for example, interview transcripts, social media profiles, or survey responses.

Some types of research questions you might use thematic analysis to answer:

  • How do patients perceive doctors in a hospital setting?
  • What are young women’s experiences on dating sites?
  • What are non-experts’ ideas and opinions about climate change?
  • How is gender constructed in high school history teaching?

To answer any of these questions, you would collect data from a group of relevant participants and then analyze it. Thematic analysis allows you a lot of flexibility in interpreting the data, and allows you to approach large data sets more easily by sorting them into broad themes.

However, it also involves the risk of missing nuances in the data. Thematic analysis is often quite subjective and relies on the researcher’s judgement, so you have to reflect carefully on your own choices and interpretations.

Pay close attention to the data to ensure that you’re not picking up on things that are not there – or obscuring things that are.


Once you’ve decided to use thematic analysis, there are different approaches to consider.

There’s the distinction between inductive and deductive approaches:

  • An inductive approach involves allowing the data to determine your themes.
  • A deductive approach involves coming to the data with some preconceived themes you expect to find reflected there, based on theory or existing knowledge.

Ask yourself: Does my theoretical framework give me a strong idea of what kind of themes I expect to find in the data (deductive), or am I planning to develop my own framework based on what I find (inductive)?

There’s also the distinction between a semantic and a latent approach:

  • A semantic approach involves analyzing the explicit content of the data.
  • A latent approach involves reading into the subtext and assumptions underlying the data.

Ask yourself: Am I interested in people’s stated opinions (semantic) or in what their statements reveal about their assumptions and social context (latent)?

After you’ve decided thematic analysis is the right method for analyzing your data, and you’ve thought about the approach you’re going to take, you can follow the six steps developed by Braun and Clarke.

The first step is to get to know our data. It’s important to get a thorough overview of all the data we collected before we start analyzing individual items.

This might involve transcribing audio , reading through the text and taking initial notes, and generally looking through the data to get familiar with it.

Next up, we need to code the data. Coding means highlighting sections of our text – usually phrases or sentences – and coming up with shorthand labels or “codes” to describe their content.

Let’s take a short example text. Say we’re researching perceptions of climate change among conservative voters aged 50 and up, and we have collected data through a series of interviews. An extract from one interview looks like this:

Coding qualitative data

Interview extract: Personally, I’m not sure. I think the climate is changing, sure, but I don’t know why or how. People say you should trust the experts, but who’s to say they don’t have their own reasons for pushing this narrative? I’m not saying they’re wrong, I’m just saying there’s reasons not to 100% trust them. The facts keep changing – it used to be called global warming.

Codes: uncertainty; distrust of experts; changing terminology

In this extract, various phrases have been tagged with different codes. Each code describes the idea or feeling expressed in that part of the text.

At this stage, we want to be thorough: we go through the transcript of every interview and highlight everything that jumps out as relevant or potentially interesting. As well as highlighting all the phrases and sentences that match these codes, we can keep adding new codes as we go through the text.

After we’ve been through the text, we collate all the data into groups identified by code. These codes allow us to gain a condensed overview of the main points and common meanings that recur throughout the data.
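Collating coded extracts can be sketched as below, using phrases from the interview extract; the code labels follow the ones discussed in this article, but the exact phrase-to-code assignments are illustrative.

```python
# Phrases from the interview extract, tagged with illustrative codes.
coded_phrases = [
    ("Personally, I'm not sure.", "uncertainty"),
    ("who's to say they don't have their own reasons", "distrust of experts"),
    ("it used to be called global warming", "changing terminology"),
]

# Collate the data into groups identified by code.
by_code = {}
for phrase, code in coded_phrases:
    by_code.setdefault(code, []).append(phrase)

print(sorted(by_code))
```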

Next, we look over the codes we’ve created, identify patterns among them, and start coming up with themes.

Themes are generally broader than codes. Most of the time, you’ll combine several codes into a single theme. In our example, we might start combining codes into themes like this:

Turning codes into themes

Codes: uncertainty; distrust of experts; misinformation
Theme: Uncertainty (with related codes incorporated into it)

At this stage, we might decide that some of our codes are too vague or not relevant enough (for example, because they don’t appear very often in the data), so they can be discarded.

Other codes might become themes in their own right. In our example, we decided that the code “uncertainty” made sense as a theme, with some other codes incorporated into it.

Again, what we decide will vary according to what we’re trying to find out. We want to create potential themes that tell us something helpful about the data for our purposes.
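Mechanically, theme generation amounts to discarding rare codes and mapping the rest onto broader labels. The tallies and the code-to-theme mapping below are hypothetical, loosely following the example in this article.

```python
from collections import Counter

# Hypothetical code tallies from a set of interview transcripts.
code_counts = Counter({
    "uncertainty": 12,
    "distrust of experts": 9,
    "changing terminology": 7,
    "misinformation": 6,
    "weather talk": 1,  # too rare to keep
})

# Discard codes that barely appear in the data.
kept = {code: n for code, n in code_counts.items() if n >= 2}

# Assign each remaining code to a broader theme (an assumed mapping).
code_to_theme = {
    "uncertainty": "Uncertainty",
    "changing terminology": "Uncertainty",
    "distrust of experts": "Distrust of experts",
    "misinformation": "Distrust of experts",
}

theme_counts = Counter()
for code, n in kept.items():
    theme_counts[code_to_theme[code]] += n

print(dict(theme_counts))
```

In practice these decisions are interpretive, not mechanical: the mapping changes as themes are reviewed and renamed.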

Now we have to make sure that our themes are useful and accurate representations of the data. Here, we return to the data set and compare our themes against it. Are we missing anything? Are these themes really present in the data? What can we change to make our themes work better?

If we encounter problems with our themes, we might split them up, combine them, discard them or create new ones: whatever makes them more useful and accurate.

For example, we might decide upon looking through the data that “changing terminology” fits better under the “uncertainty” theme than under “distrust of experts,” since the data labelled with this code involves confusion, not necessarily distrust.

Now that you have a final list of themes, it’s time to name and define each of them.

Defining themes involves formulating exactly what we mean by each theme and figuring out how it helps us understand the data.

Naming themes involves coming up with a succinct and easily understandable name for each theme.

For example, we might look at “distrust of experts” and determine exactly who we mean by “experts” in this theme. We might decide that a better name for the theme is “distrust of authority” or “conspiracy thinking”.

Finally, we’ll write up our analysis of the data. Like all academic texts, writing up a thematic analysis requires an introduction to establish our research question, aims and approach.

We should also include a methodology section, describing how we collected the data (e.g. through semi-structured interviews or open-ended survey questions) and explaining how we conducted the thematic analysis itself.

The results or findings section usually addresses each theme in turn. We describe how often the themes come up and what they mean, including examples from the data as evidence. Finally, our conclusion explains the main takeaways and shows how the analysis has answered our research question.

In our example, we might argue that conspiracy thinking about climate change is widespread among older conservative voters, point out the uncertainty with which many voters view the issue, and discuss the role of misinformation in respondents’ perceptions.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Measures of central tendency
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Discourse analysis
  • Cohort study
  • Peer review
  • Ethnography

Research bias

  • Implicit bias
  • Cognitive bias
  • Conformity bias
  • Hawthorne effect
  • Availability heuristic
  • Attrition bias
  • Social desirability bias

Cite this Scribbr article


Caulfield, J. (2023, June 22). How to Do Thematic Analysis | Step-by-Step Guide & Examples. Scribbr. Retrieved August 28, 2024, from https://www.scribbr.com/methodology/thematic-analysis/



Research Paper – Structure, Examples and Writing Guide

Research Paper

Definition:

A research paper is a written document that presents the author’s original research, analysis, and interpretation of a specific topic or issue.

It is typically based on empirical evidence and may involve qualitative or quantitative research methods, or a combination of both. The purpose of a research paper is to contribute new knowledge or insights to a particular field of study, and to demonstrate the author’s understanding of the existing literature and theories related to the topic.

Structure of Research Paper

The structure of a research paper typically follows a standard format, consisting of several sections that convey specific information about the research study. The following is a detailed explanation of the structure of a research paper:

Title Page

The title page contains the title of the paper, the name(s) of the author(s), and the affiliation(s) of the author(s). It also includes the date of submission and possibly the name of the journal or conference where the paper is to be published.

Abstract

The abstract is a brief summary of the research paper, typically ranging from 100 to 250 words. It should include the research question, the methods used, the key findings, and the implications of the results. The abstract should be written in a concise and clear manner to allow readers to quickly grasp the essence of the research.

Introduction

The introduction section of a research paper provides background information about the research problem, the research question, and the research objectives. It also outlines the significance of the research, the research gap that it aims to fill, and the approach taken to address the research question. Finally, the introduction section ends with a clear statement of the research hypothesis or research question.

Literature Review

The literature review section of a research paper provides an overview of the existing literature on the topic of study. It includes a critical analysis and synthesis of the literature, highlighting the key concepts, themes, and debates. The literature review should also demonstrate the research gap and how the current study seeks to address it.

Methods

The methods section of a research paper describes the research design, the sample selection, the data collection and analysis procedures, and the statistical methods used to analyze the data. This section should provide sufficient detail for other researchers to replicate the study.

Results

The results section presents the findings of the research, using tables, graphs, and figures to illustrate the data. The findings should be presented in a clear and concise manner, with reference to the research question and hypothesis.

Discussion

The discussion section of a research paper interprets the findings and discusses their implications for the research question, the literature review, and the field of study. It should also address the limitations of the study and suggest future research directions.

Conclusion

The conclusion section summarizes the main findings of the study, restates the research question and hypothesis, and provides a final reflection on the significance of the research.

References

The references section provides a list of all the sources cited in the paper, following a specific citation style such as APA, MLA or Chicago.

How to Write Research Paper

You can write a research paper by following this guide:

  • Choose a Topic: The first step is to select a topic that interests you and is relevant to your field of study. Brainstorm ideas and narrow down to a research question that is specific and researchable.
  • Conduct a Literature Review: The literature review helps you identify the gap in the existing research and provides a basis for your research question. It also helps you to develop a theoretical framework and research hypothesis.
  • Develop a Thesis Statement: The thesis statement is the main argument of your research paper. It should be clear, concise, and specific to your research question.
  • Plan your Research: Develop a research plan that outlines the methods, data sources, and data analysis procedures. This will help you to collect and analyze data effectively.
  • Collect and Analyze Data: Collect data using various methods such as surveys, interviews, observations, or experiments. Analyze data using statistical tools or other qualitative methods.
  • Organize your Paper: Organize your paper into sections such as Introduction, Literature Review, Methods, Results, Discussion, and Conclusion. Ensure that each section is coherent and follows a logical flow.
  • Write your Paper: Start by writing the introduction, followed by the literature review, methods, results, discussion, and conclusion. Ensure that your writing is clear, concise, and follows the required formatting and citation styles.
  • Edit and Proofread your Paper: Review your paper for grammar and spelling errors, and ensure that it is well-structured and easy to read. Ask someone else to review your paper to get feedback and suggestions for improvement.
  • Cite your Sources: Ensure that you properly cite all sources used in your research paper. This is essential for giving credit to the original authors and avoiding plagiarism.

Research Paper Example

Note: The example research paper below is for illustrative purposes only and is not an actual research paper. Actual research papers may have different structures, contents, and formats depending on the field of study, research question, data collection and analysis methods, and other factors. Students should always consult with their professors or supervisors for specific guidelines and expectations for their research papers.

Research Paper Example for Students:

Title: The Impact of Social Media on Mental Health among Young Adults

Abstract: This study aims to investigate the impact of social media use on the mental health of young adults. A literature review was conducted to examine the existing research on the topic. A survey was then administered to 200 university students to collect data on their social media use, mental health status, and perceived impact of social media on their mental health. The results showed that social media use is positively associated with depression, anxiety, and stress. The study also found that social comparison, cyberbullying, and FOMO (Fear of Missing Out) are significant predictors of mental health problems among young adults.

Introduction: Social media has become an integral part of modern life, particularly among young adults. While social media has many benefits, including increased communication and social connectivity, it has also been associated with negative outcomes, such as addiction, cyberbullying, and mental health problems. This study aims to investigate the impact of social media use on the mental health of young adults.

Literature Review: The literature review highlights the existing research on the impact of social media use on mental health. The review shows that social media use is associated with depression, anxiety, stress, and other mental health problems. The review also identifies the factors that contribute to the negative impact of social media, including social comparison, cyberbullying, and FOMO.

Methods: A survey was administered to 200 university students to collect data on their social media use, mental health status, and perceived impact of social media on their mental health. The survey included questions on social media use, mental health status (measured using the DASS-21), and perceived impact of social media on their mental health. Data were analyzed using descriptive statistics and regression analysis.
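Purely as an illustration of this kind of analysis step (not the actual study data), the sketch below computes a Pearson correlation between hypothetical daily social media hours and stress scores:

```python
import statistics as st

# Entirely hypothetical respondent data (not from the study above).
hours = [0.5, 1.0, 2.0, 3.0, 4.5, 5.0]   # daily social media use (hours)
stress = [8, 9, 12, 15, 18, 21]          # stress score

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from the definition."""
    mx, my = st.mean(x), st.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(hours, stress)
print(round(r, 3))  # a value near +1 indicates a strong positive association
```

A real analysis would use validated instruments and inferential tests (e.g. regression with significance testing) rather than this toy computation.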

Results: The results showed that social media use is positively associated with depression, anxiety, and stress. The study also found that social comparison, cyberbullying, and FOMO are significant predictors of mental health problems among young adults.

Discussion: The study’s findings suggest that social media use has a negative impact on the mental health of young adults. The study highlights the need for interventions that address the factors contributing to the negative impact of social media, such as social comparison, cyberbullying, and FOMO.

Conclusion: In conclusion, social media use has a significant impact on the mental health of young adults. The study’s findings underscore the need for interventions that promote healthy social media use and address the negative outcomes associated with social media use. Future research can explore the effectiveness of interventions aimed at reducing the negative impact of social media on mental health. Additionally, longitudinal studies can investigate the long-term effects of social media use on mental health.

Limitations: The study has some limitations, including the use of self-report measures and a cross-sectional design. The use of self-report measures may result in biased responses, and a cross-sectional design limits the ability to establish causality.

Implications: The study’s findings have implications for mental health professionals, educators, and policymakers. Mental health professionals can use the findings to develop interventions that address the negative impact of social media use on mental health. Educators can incorporate social media literacy into their curriculum to promote healthy social media use among young adults. Policymakers can use the findings to develop policies that protect young adults from the negative outcomes associated with social media use.

References:

  • Twenge, J. M., & Campbell, W. K. (2019). Associations between screen time and lower psychological well-being among children and adolescents: Evidence from a population-based study. Preventive medicine reports, 15, 100918.
  • Primack, B. A., Shensa, A., Escobar-Viera, C. G., Barrett, E. L., Sidani, J. E., Colditz, J. B., … & James, A. E. (2017). Use of multiple social media platforms and symptoms of depression and anxiety: A nationally-representative study among US young adults. Computers in Human Behavior, 69, 1-9.
  • Van der Meer, T. G., & Verhoeven, J. W. (2017). Social media and its impact on academic performance of students. Journal of Information Technology Education: Research, 16, 383-398.

Appendix: The survey used in this study is provided below.

Social Media and Mental Health Survey

  • How often do you use social media per day?
  • Less than 30 minutes
  • 30 minutes to 1 hour
  • 1 to 2 hours
  • 2 to 4 hours
  • More than 4 hours
  • Which social media platforms do you use?
  • Others (Please specify)
  • How often do you experience the following on social media?
  • Social comparison (comparing yourself to others)
  • Cyberbullying
  • Fear of Missing Out (FOMO)
  • Have you ever experienced any of the following mental health problems in the past month?
  • Do you think social media use has a positive or negative impact on your mental health?
  • Very positive
  • Somewhat positive
  • Somewhat negative
  • Very negative
  • In your opinion, which factors contribute to the negative impact of social media on mental health?
  • Social comparison
  • In your opinion, what interventions could be effective in reducing the negative impact of social media on mental health?
  • Education on healthy social media use
  • Counseling for mental health problems caused by social media
  • Social media detox programs
  • Regulation of social media use

Thank you for your participation!

Applications of Research Paper

Research papers have several applications in various fields, including:

  • Advancing knowledge: Research papers contribute to the advancement of knowledge by generating new insights, theories, and findings that can inform future research and practice. They help to answer important questions, clarify existing knowledge, and identify areas that require further investigation.
  • Informing policy: Research papers can inform policy decisions by providing evidence-based recommendations for policymakers. They can help to identify gaps in current policies, evaluate the effectiveness of interventions, and inform the development of new policies and regulations.
  • Improving practice: Research papers can improve practice by providing evidence-based guidance for professionals in various fields, including medicine, education, business, and psychology. They can inform the development of best practices, guidelines, and standards of care that can improve outcomes for individuals and organizations.
  • Educating students: Research papers are often used as teaching tools in universities and colleges to educate students about research methods, data analysis, and academic writing. They help students to develop critical thinking skills, research skills, and communication skills that are essential for success in many careers.
  • Fostering collaboration: Research papers can foster collaboration among researchers, practitioners, and policymakers by providing a platform for sharing knowledge and ideas. They can facilitate interdisciplinary collaborations and partnerships that can lead to innovative solutions to complex problems.

When to Write Research Paper

Research papers are typically written when a person has completed a research project or when they have conducted a study and have obtained data or findings that they want to share with the academic or professional community. Research papers are usually written in academic settings, such as universities, but they can also be written in professional settings, such as research organizations, government agencies, or private companies.

Here are some common situations where a person might need to write a research paper:

  • For academic purposes: Students in universities and colleges are often required to write research papers as part of their coursework, particularly in the social sciences, natural sciences, and humanities. Writing research papers helps students to develop research skills, critical thinking skills, and academic writing skills.
  • For publication: Researchers often write research papers to publish their findings in academic journals or to present their work at academic conferences. Publishing research papers is an important way to disseminate research findings to the academic community and to establish oneself as an expert in a particular field.
  • To inform policy or practice: Researchers may write research papers to inform policy decisions or to improve practice in various fields. Research findings can be used to inform the development of policies, guidelines, and best practices that can improve outcomes for individuals and organizations.
  • To share new insights or ideas: Researchers may write research papers to share new insights or ideas with the academic or professional community. They may present new theories, propose new research methods, or challenge existing paradigms in their field.

Purpose of Research Paper

The purpose of a research paper is to present the results of a study or investigation in a clear, concise, and structured manner. Research papers are written to communicate new knowledge, ideas, or findings to a specific audience, such as researchers, scholars, practitioners, or policymakers. The primary purposes of a research paper are:

  • To contribute to the body of knowledge: Research papers aim to add new knowledge or insights to a particular field or discipline. They do this by reporting the results of empirical studies, reviewing and synthesizing existing literature, proposing new theories, or providing new perspectives on a topic.
  • To inform or persuade: Research papers are written to inform or persuade the reader about a particular issue, topic, or phenomenon. They present evidence and arguments to support their claims and seek to persuade the reader of the validity of their findings or recommendations.
  • To advance the field: Research papers seek to advance the field or discipline by identifying gaps in knowledge, proposing new research questions or approaches, or challenging existing assumptions or paradigms. They aim to contribute to ongoing debates and discussions within a field and to stimulate further research and inquiry.
  • To demonstrate research skills: Research papers demonstrate the author’s research skills, including their ability to design and conduct a study, collect and analyze data, and interpret and communicate findings. They also demonstrate the author’s ability to critically evaluate existing literature, synthesize information from multiple sources, and write in a clear and structured manner.

Characteristics of Research Paper

Research papers have several characteristics that distinguish them from other forms of academic or professional writing. Here are some common characteristics of research papers:

  • Evidence-based: Research papers are based on empirical evidence, which is collected through rigorous research methods such as experiments, surveys, observations, or interviews. They rely on objective data and facts to support their claims and conclusions.
  • Structured and organized: Research papers have a clear and logical structure, with sections such as introduction, literature review, methods, results, discussion, and conclusion. They are organized in a way that helps the reader to follow the argument and understand the findings.
  • Formal and objective: Research papers are written in a formal and objective tone, with an emphasis on clarity, precision, and accuracy. They avoid subjective language or personal opinions and instead rely on objective data and analysis to support their arguments.
  • Citations and references: Research papers include citations and references to acknowledge the sources of information and ideas used in the paper. They use a specific citation style, such as APA, MLA, or Chicago, to ensure consistency and accuracy.
  • Peer-reviewed: Research papers are often peer-reviewed, which means they are evaluated by other experts in the field before they are published. Peer-review ensures that the research is of high quality, meets ethical standards, and contributes to the advancement of knowledge in the field.
  • Objective and unbiased: Research papers strive to be objective and unbiased in their presentation of the findings. They avoid personal biases or preconceptions and instead rely on the data and analysis to draw conclusions.

Advantages of Research Paper

Research papers have many advantages, both for the individual researcher and for the broader academic and professional community. Here are some advantages of research papers:

  • Contribution to knowledge: Research papers contribute to the body of knowledge in a particular field or discipline. They add new information, insights, and perspectives to existing literature and help advance the understanding of a particular phenomenon or issue.
  • Opportunity for intellectual growth: Research papers provide an opportunity for intellectual growth for the researcher. They require critical thinking, problem-solving, and creativity, which can help develop the researcher’s skills and knowledge.
  • Career advancement: Research papers can help advance the researcher’s career by demonstrating their expertise and contributions to the field. They can also lead to new research opportunities, collaborations, and funding.
  • Academic recognition: Research papers can lead to academic recognition in the form of awards, grants, or invitations to speak at conferences or events. They can also contribute to the researcher’s reputation and standing in the field.
  • Impact on policy and practice: Research papers can have a significant impact on policy and practice. They can inform policy decisions, guide practice, and lead to changes in laws, regulations, or procedures.
  • Advancement of society: Research papers can contribute to the advancement of society by addressing important issues, identifying solutions to problems, and promoting social justice and equality.

Limitations of Research Paper

Research papers also have some limitations that should be considered when interpreting their findings or implications. Here are some common limitations of research papers:

  • Limited generalizability: Research findings may not be generalizable to other populations, settings, or contexts. Studies often use specific samples or conditions that may not reflect the broader population or real-world situations.
  • Potential for bias: Research papers may be biased due to factors such as sample selection, measurement errors, or researcher biases. It is important to evaluate the quality of the research design and methods used to ensure that the findings are valid and reliable.
  • Ethical concerns: Research papers may raise ethical concerns, such as the use of vulnerable populations or invasive procedures. Researchers must adhere to ethical guidelines and obtain informed consent from participants to ensure that the research is conducted in a responsible and respectful manner.
  • Limitations of methodology: Research papers may be limited by the methodology used to collect and analyze data. For example, certain research methods may not capture the complexity or nuance of a particular phenomenon, or may not be appropriate for certain research questions.
  • Publication bias: Research papers may be subject to publication bias, where positive or significant findings are more likely to be published than negative or non-significant findings. This can skew the overall findings of a particular area of research.
  • Time and resource constraints: Research papers may be limited by time and resource constraints, which can affect the quality and scope of the research. Researchers may not have access to certain data or resources, or may be unable to conduct long-term studies due to practical limitations.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer





Sodhi M , Rezaeianzadeh R , Kezouh A , Etminan M. Risk of Gastrointestinal Adverse Events Associated With Glucagon-Like Peptide-1 Receptor Agonists for Weight Loss. JAMA. 2023;330(18):1795–1797. doi:10.1001/jama.2023.19574


Risk of Gastrointestinal Adverse Events Associated With Glucagon-Like Peptide-1 Receptor Agonists for Weight Loss

  • 1 Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
  • 2 StatExpert Ltd, Laval, Quebec, Canada
  • 3 Department of Ophthalmology and Visual Sciences and Medicine, University of British Columbia, Vancouver, Canada

Glucagon-like peptide 1 (GLP-1) agonists are medications approved for treatment of diabetes that recently have also been used off label for weight loss.1 Studies have found increased risks of gastrointestinal adverse events (biliary disease,2 pancreatitis,3 bowel obstruction,4 and gastroparesis5) in patients with diabetes.2-5 Because such patients have higher baseline risk for gastrointestinal adverse events, risk in patients taking these drugs for other indications may differ. Randomized trials examining efficacy of GLP-1 agonists for weight loss were not designed to capture these events2 due to small sample sizes and short follow-up. We examined gastrointestinal adverse events associated with GLP-1 agonists used for weight loss in a clinical setting.

We used a random sample of 16 million patients (2006-2020) from the PharMetrics Plus for Academics database (IQVIA), a large health claims database that captures 93% of all outpatient prescriptions and physician diagnoses in the US through the International Classification of Diseases, Ninth Revision (ICD-9) or ICD-10. In our cohort study, we included new users of semaglutide or liraglutide, 2 main GLP-1 agonists, and the active comparator bupropion-naltrexone, a weight loss agent unrelated to GLP-1 agonists. Because semaglutide was marketed for weight loss after the study period (2021), we ensured all GLP-1 agonist and bupropion-naltrexone users had an obesity code in the 90 days prior or up to 30 days after cohort entry, excluding those with a diabetes or antidiabetic drug code.

Patients were observed from first prescription of a study drug to first mutually exclusive incidence (defined as first ICD-9 or ICD-10 code) of biliary disease (including cholecystitis, cholelithiasis, and choledocholithiasis), pancreatitis (including gallstone pancreatitis), bowel obstruction, or gastroparesis (defined as use of a code or a promotility agent). They were followed up to the end of the study period (June 2020) or censored at the time of a switch. Hazard ratios (HRs) from a Cox model were adjusted for age, sex, alcohol use, smoking, hyperlipidemia, abdominal surgery in the previous 30 days, and geographic location, which were identified as common cause variables or risk factors.6 Two sensitivity analyses were undertaken, one excluding hyperlipidemia (because more semaglutide users had hyperlipidemia) and another including patients without diabetes regardless of having an obesity code. Due to absence of data on body mass index (BMI), the E-value was used to examine how strong unmeasured confounding would need to be to negate observed results, with E-value HRs of at least 2 indicating BMI is unlikely to change study results. Statistical significance was defined as a 2-sided 95% CI that did not cross 1. Analyses were performed using SAS version 9.4. Ethics approval was obtained from the University of British Columbia's clinical research ethics board with a waiver of informed consent.
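The E-value used here can be computed directly. For a risk ratio greater than 1 (applied in this letter to HR point estimates as an approximation), the standard formula is E = RR + sqrt(RR × (RR − 1)). A minimal sketch:

```python
import math

def e_value(rr: float) -> float:
    """E-value for a risk ratio (or an HR used as an approximation):
    the minimum strength of association an unmeasured confounder would
    need with both exposure and outcome to explain away the estimate."""
    if rr < 1:
        rr = 1 / rr  # invert protective estimates first
    return rr + math.sqrt(rr * (rr - 1))

# Illustration: an HR of 9.09 yields an E-value of about 17.7, meaning only
# a very strong unmeasured confounder could fully explain such an estimate.
print(round(e_value(9.09), 2))
```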

Our cohort included 4144 liraglutide, 613 semaglutide, and 654 bupropion-naltrexone users. Incidence rates for the 4 outcomes were elevated among GLP-1 agonist users compared with bupropion-naltrexone users (Table 1). For example, incidence of biliary disease (per 1000 person-years) was 11.7 for semaglutide, 18.6 for liraglutide, and 12.6 for bupropion-naltrexone; for pancreatitis, the corresponding rates were 4.6, 7.9, and 1.0.
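An incidence rate per 1000 person-years, as reported above, is simply events divided by accumulated person-time. A toy calculation with made-up counts (the study's underlying numerators and denominators are not given here):

```python
def incidence_per_1000_py(events: int, person_years: float) -> float:
    """Crude incidence rate per 1000 person-years of follow-up."""
    return events / person_years * 1000

# Hypothetical example: 24 events observed over 2050 person-years
print(round(incidence_per_1000_py(24, 2050), 1))  # 11.7
```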

Use of GLP-1 agonists compared with bupropion-naltrexone was associated with increased risk of pancreatitis (adjusted HR, 9.09 [95% CI, 1.25-66.00]), bowel obstruction (HR, 4.22 [95% CI, 1.02-17.40]), and gastroparesis (HR, 3.67 [95% CI, 1.15-11.90]) but not biliary disease (HR, 1.50 [95% CI, 0.89-2.53]). Exclusion of hyperlipidemia from the analysis did not change the results (Table 2). Inclusion of GLP-1 agonists regardless of history of obesity reduced HRs and narrowed CIs but did not change the significance of the results (Table 2). E-value HRs did not suggest potential confounding by BMI.

This study found that use of GLP-1 agonists for weight loss compared with use of bupropion-naltrexone was associated with increased risk of pancreatitis, gastroparesis, and bowel obstruction but not biliary disease.

Given the wide use of these drugs, these adverse events, although rare, must be considered by patients contemplating using them for weight loss, because the risk-benefit calculus for this group might differ from that of patients who use them for diabetes. A limitation is that although all GLP-1 agonist users had a record of obesity and no record of diabetes, it is uncertain whether every prescription was for weight loss.

Accepted for Publication: September 11, 2023.

Published Online: October 5, 2023. doi:10.1001/jama.2023.19574

Correction: This article was corrected on December 21, 2023, to update the full name of the database used.

Corresponding Author: Mahyar Etminan, PharmD, MSc, Faculty of Medicine, Departments of Ophthalmology and Visual Sciences and Medicine, The Eye Care Center, University of British Columbia, 2550 Willow St, Room 323, Vancouver, BC V5Z 3N9, Canada ( [email protected] ).

Author Contributions: Dr Etminan had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Sodhi, Rezaeianzadeh, Etminan.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Sodhi, Rezaeianzadeh, Etminan.

Critical review of the manuscript for important intellectual content: All authors.

Statistical analysis: Kezouh.

Obtained funding: Etminan.

Administrative, technical, or material support: Sodhi.

Supervision: Etminan.

Conflict of Interest Disclosures: None reported.

Funding/Support: This study was funded by internal research funds from the Department of Ophthalmology and Visual Sciences, University of British Columbia.

Role of the Funder/Sponsor: The funder had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Data Sharing Statement: See Supplement .



How to conduct a feasibility study: Template and examples


Editor’s note: This article was last updated on 27 August 2024 to bolster the step-by-step guide with more detailed instructions, more robust examples, and a downloadable, customizable template.


Opportunities are everywhere. Some opportunities are small and don’t require many resources. Others are massive and need further analysis and evaluation.

One of your key responsibilities as a product manager is to evaluate the potential success of those opportunities before investing significant money, time, and resources. A feasibility study, also known as a feasibility assessment or feasibility analysis, is a critical tool that can help product managers determine whether a product idea or opportunity is viable, feasible, and profitable.

So, what is a feasibility analysis? Why should product managers use it? And how do you conduct one?

Click here to download our customizable feasibility study template.

What is a feasibility study?

A feasibility study is a systematic analysis and evaluation of a product opportunity’s potential to succeed. It aims to determine whether a proposed opportunity is financially and technically viable, operationally feasible, and commercially profitable.

A feasibility study typically includes an assessment of a wide range of factors, including the technical requirements of the product, resources needed to develop and launch the product, the potential market gap and demand, the competitive landscape, and economic and financial viability. These factors can be broken down into different types of feasibility studies:

  • Technical feasibility — Evaluates the technical resources and expertise needed to develop the product and identifies any technical challenges that could arise
  • Financial feasibility — Analyzes the costs involved, potential revenue, and overall financial viability of the opportunity
  • Market feasibility — Assesses the demand for the product, market trends, target audience, and competitive landscape
  • Operational feasibility — Looks at the organizational structure, logistics, and day-to-day operations required to launch and sustain the product
  • Legal feasibility — Examines any legal considerations, including regulations, patents, and compliance requirements that could affect the opportunity

Based on the study’s findings, the product manager and their product team can decide whether to proceed with the product opportunity, modify its scope, or pursue another opportunity and solve a different problem.

Conducting a feasibility study helps PMs ensure that resources are invested in opportunities that have a high likelihood of success and align with the overall objectives and goals of the product strategy.

What are feasibility analyses used for?

Feasibility studies are particularly useful when introducing entirely new products or verticals. Product managers can use the results of a feasibility study to:

  • Assess the technical feasibility of a product opportunity — Evaluate whether the proposed product idea or opportunity can be developed with the available technology, tools, resources, and expertise
  • Determine a project’s financial viability — By analyzing the costs of development, manufacturing, and distribution, a feasibility study helps you determine whether your product is financially viable and can generate a positive return on investment (ROI)
  • Evaluate customer demand and the competitive landscape — Assessing the potential market size, target audience, and competitive landscape for the product opportunity can inform decisions about the overall product positioning, marketing strategies, and pricing
  • Identify potential risks and challenges — Identify potential obstacles or challenges that could impact the success of the identified opportunity, such as regulatory hurdles, operational and legal issues, and technical limitations
  • Refine the product concept — The insights gained from a feasibility study can help you refine the product’s concept, make necessary modifications to the scope, and ultimately create a better product that is more likely to succeed in the market and meet users’ expectations

How to conduct a feasibility study

The activities involved in conducting a feasibility study differ from one organization to another, and the expectations, thresholds, and deliverables vary from role to role. However, a general set of guidelines can help you get started.

Here are some basic steps to conduct and report a feasibility study for major product opportunities or features:

1. Clearly define the opportunity

Imagine your user base is facing a significant problem that your product doesn’t solve. This is an opportunity. Define the opportunity clearly, support it with data, talk to your stakeholders to understand the opportunity space, and use it to define the objective.

2. Define the objective and scope

Each opportunity should be coupled with a business objective and should align with your product strategy.

Determine and clearly communicate the business goals and objectives of the opportunity. Align those objectives with company leaders to make sure everyone is on the same page. Lastly, define the scope of what you plan to build.

3. Conduct market and user research

Now that you have everyone on the same page and the objective and scope of the opportunity clearly defined, gather data and insights on the target market.

Include elements like the total addressable market (TAM), growth potential, competitors’ insights, and deep insight into users’ problems and preferences collected through techniques like interviews, surveys, observation studies, contextual inquiries, and focus groups.

4. Analyze technical feasibility

Suppose your market and user research have validated the problem you are trying to solve. The next step is to work alongside your engineers to assess the technical resources and expertise needed to launch the product to the market.


Dig deeper into the proposed solution and try to comprehend the technical limitations and estimated time required for the product to be in your users’ hands. A detailed assessment might include:

  • Technical requirements — What technology stack is needed? Does your team have the necessary expertise? Are there any integration challenges?
  • Development timeline — How long will it take to develop the solution? What are the critical milestones?
  • Resource allocation — What resources (hardware, software, personnel) are required? Can existing resources be repurposed?

5. Assess financial viability

If your company has a product pricing team, work closely with them to determine the willingness to pay (WTP) and devise a monetization strategy for the new feature.

Conduct a comprehensive financial analysis, including the total cost of development, revenue streams, and the expected return on investment (ROI) based on the agreed-upon monetization strategy. Key elements to include:

  • Cost analysis — Breakdown of development, production, and operational costs
  • Revenue projections — Estimated revenue from different pricing models
  • ROI calculation — Expected return on investment and payback period

6. Evaluate potential risks

Now that you have almost a complete picture, identify the risks associated with building and launching the opportunity. Risks may include things like regulatory hurdles, technical limitations, and any operational risks.

A thorough risk assessment should cover:

  • Technical risks — Potential issues with technology, integration, or scalability
  • Market risks — Changes in market conditions, customer preferences, or competitive landscape
  • Operational risks — Challenges in logistics, staffing, or supply chain management
  • Regulatory risks — Legal or compliance issues that could affect the product’s launch. For more on regulatory risks, check out this Investopedia article

7. Decide, prepare, and share

Based on the steps above, you should end up with a comprehensive report that helps you decide whether to pursue the opportunity, modify its scope, or explore alternative options. Here’s what you should do next:

  • Prepare your report — Compile all your findings, including the feasibility analysis, market research, technical assessment, financial viability, and risk analysis into a detailed report. This document should provide a clear recommendation on whether to move forward with the project
  • Create an executive summary — Summarize the key findings and recommendations in a concise executive summary, tailored for stakeholders such as the C-suite. The executive summary should capture the essence of your report, focusing on the most critical points
  • Present to stakeholders — Share your report with stakeholders, ensuring you’re prepared to discuss the analysis and defend your recommendations. Make sure to involve key stakeholders early in the process to build buy-in and address any concerns they may have
  • Prepare for next steps — Depending on the decision, be ready to either proceed with the project, implement modifications, or pivot to another opportunity. Outline the action plan, resource requirements, and timeline for the next phase
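As a rough illustration, the deliverables produced by the seven steps above can be captured in a simple report structure. This is a hypothetical sketch, not a standard artifact; all field names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class FeasibilityReport:
    """Skeleton of a feasibility study report (illustrative structure only)."""
    opportunity: str                 # step 1: the clearly defined opportunity
    objective: str                   # step 2: business objective and scope
    scope: str
    market_findings: list = field(default_factory=list)      # step 3
    technical_findings: list = field(default_factory=list)   # step 4
    financials: dict = field(default_factory=dict)           # step 5
    risks: list = field(default_factory=list)                # step 6
    recommendation: str = "undecided"  # step 7: "go", "modify scope", or "no-go"

    def executive_summary(self) -> str:
        """Condense the report into the key points stakeholders need."""
        return (f"Opportunity: {self.opportunity}\n"
                f"Objective: {self.objective}\n"
                f"Risks identified: {len(self.risks)}\n"
                f"Recommendation: {self.recommendation}")

report = FeasibilityReport(
    opportunity="AI-powered task prioritization",
    objective="Increase engagement and retention",
    scope="Algorithm plus integration into the existing UI",
    risks=["algorithm accuracy", "GDPR compliance"],
    recommendation="go",
)
print(report.executive_summary())
```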

Feasibility study template

The following feasibility study report template is designed to help you evaluate the feasibility of a product opportunity and provide a comprehensive report to inform decision-making and guide the development process.

Note: You can customize this template to fit your specific needs. Click here to download and customize this feasibility study report template.

Feasibility Study Report Template

Feasibility study example

Imagine you’re a product manager at a company that specializes in project management tools. Your team has identified a potential opportunity to expand the product offering by developing a new AI-powered feature that can automatically prioritize tasks for users based on their deadlines, workload, and importance.

A feasibility study can help you assess the viability of this opportunity. Here’s how you might approach it according to the template above:

  • Opportunity description — The opportunity lies in creating an AI-powered feature that automatically prioritizes tasks based on user-defined parameters such as deadlines, workload, and task importance. This feature is expected to enhance user productivity by helping teams focus on high-priority tasks and ensuring timely project completion
  • Problem statement — Many users of project management tools struggle with managing and prioritizing tasks effectively, leading to missed deadlines and project delays. Current solutions often require manual input or lack sophisticated algorithms to adjust priorities dynamically. The proposed AI-powered feature aims to solve this problem by automating the prioritization process, thereby reducing manual effort and improving overall project efficiency
  • Business objective — The primary objective is to increase user engagement and satisfaction by offering a feature that addresses a common pain point. The feature is also intended to increase customer retention by providing added value and driving user adoption
  • Scope — The scope includes the development of an AI algorithm capable of analyzing task parameters (e.g., deadlines, workload) and dynamically prioritizing tasks. The feature will be integrated into the existing project management tool interface, with minimal disruption to current users. Additionally, the scope covers user training and support for the new feature

Market analysis:

  • Total addressable market (TAM)  — The TAM for this feature includes all users who actively manage projects and could benefit from enhanced task prioritization
  • Competitor analysis — Competitor products such as Asana and Trello offer basic task prioritization features, but none use advanced AI algorithms. This presents a unique opportunity to differentiate this product by offering a more sophisticated solution
  • User pain points — Surveys and interviews with current users reveal that 65 percent struggle with manual task prioritization, leading to inefficiencies and missed deadlines. Users expressed a strong interest in an automated solution that could save time and improve project outcomes

Technical requirements:

  • AI algorithm development — The core of the feature is an AI algorithm that can analyze multiple factors to prioritize tasks. This requires expertise in machine learning, data processing, and AI integration
  • Integration with existing infrastructure — The feature must seamlessly integrate with the existing architecture without causing significant disruptions. This includes data compatibility, API development, and UI/UX considerations
  • Data handling and privacy — The feature will process sensitive project data, so robust data privacy and security measures must be implemented to comply with regulations like GDPR

Development timeline:

  • Phase 1 (3 months) — Research and development of the AI algorithm, including training with sample datasets
  • Phase 2 (2 months) — Integration with the platform, including UI/UX design adjustments
  • Phase 3 (1 month) — Testing, quality assurance, and bug fixing
  • Phase 4 (1 month) — User training materials and documentation preparation

Resource allocation:

  • Development team  — Two AI specialists, three backend developers, two frontend developers, one project manager
  • Hardware/software  — Additional cloud computing resources for AI processing, development tools for machine learning, testing environments

Cost analysis:

  • Development costs — Estimated at $300,000, including salaries, cloud computing resources, and software licenses
  • Marketing and launch costs  — $50,000 for promotional activities, user onboarding, and initial support
  • Operational costs  — $20,000/year for maintenance, AI model updates, and ongoing support

Revenue projections:

  • Pricing model — The AI-powered feature will be offered as part of a premium subscription tier, with an additional monthly fee of $10/user
  • User adoption — Based on user surveys, an estimated 25 percent of the current user base (10,000 users) is expected to upgrade to the premium tier within the first year
  • Projected revenue — First-year revenue is projected at $1.2 million, with an expected growth rate of 10 percent annually

ROI calculation:

  • Break-even point — The project is expected to break even within 6 months of launch
  • Five-year ROI — The feature is projected to generate a 200% ROI over five years, driven by increased subscription fees and user retention
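The projections above boil down to simple arithmetic. Here is an illustrative Python sketch of that math using the report's stated inputs (10,000 upgrading users, $10/user/month, 10 percent annual growth). The report's 6-month break-even and 200 percent ROI figures presumably also reflect margins and adoption ramp-up not itemized here, so these functions show the method rather than reproduce those exact numbers.

```python
# Illustrative sketch of the subscription-revenue and ROI arithmetic.
# Inputs are the report's stated assumptions; margins and ramp-up are
# deliberately left out, so treat the outputs as an upper bound.

def annual_revenue(users: int, fee_per_month: float) -> float:
    """Subscription revenue for one year at a flat monthly fee."""
    return users * fee_per_month * 12

def five_year_revenue(year1: float, growth: float) -> float:
    """Total revenue over five years with compound annual growth."""
    return sum(year1 * (1 + growth) ** year for year in range(5))

def roi(total_gain: float, total_cost: float) -> float:
    """Return on investment as a fraction (2.0 == 200%)."""
    return (total_gain - total_cost) / total_cost

year1 = annual_revenue(users=10_000, fee_per_month=10)
print(f"Year-1 revenue: ${year1:,.0f}")  # $1,200,000, matching the projection
print(f"Five-year revenue at 10% growth: ${five_year_revenue(year1, 0.10):,.0f}")
```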

Technical risks:

  • AI algorithm complexity — Developing an accurate and reliable AI algorithm is challenging and may require multiple iterations
  • Integration issues — There is a risk that integrating the new feature could disrupt the existing platform, leading to user dissatisfaction

Market risks:

  • User adoption — There’s a risk that users may not perceive sufficient value in the AI feature to justify the additional cost, leading to lower-than-expected adoption rates

Operational risks:

  • Support and maintenance — Maintaining the AI feature requires continuous updates and monitoring, which could strain the development and support teams

Regulatory risks:

  • Data privacy compliance — Handling sensitive project data requires strict adherence to data privacy regulations. Noncompliance could lead to legal challenges and damage to the company’s reputation

Decision and next steps:

  • Decision — Based on the comprehensive analysis, the recommendation is to proceed with the development and launch of the AI-powered task prioritization feature. The potential for increased user engagement, differentiation from competitors, and positive ROI justifies the investment
  • Prepare the report — A detailed report will be compiled, including all findings from the feasibility study, cost-benefit analysis, and risk assessments. This report will be presented to key stakeholders for approval
  • Create an executive summary — A concise executive summary will be prepared for the C-suite, highlighting the key benefits, expected ROI, and strategic alignment with the company’s goals
  • Next steps — Upon approval, the project will move into the development phase, following the timeline and resource allocation outlined in the study. Continuous monitoring and iterative improvements will be made based on user feedback and performance metrics

8. Executive summary

This feasibility study evaluates the potential for developing and launching an AI-powered task prioritization feature within our project management tool. The feature is intended to automatically prioritize tasks based on deadlines, workload, and task importance, thus improving user productivity and project efficiency. The study concludes that the feature is both technically and financially viable, with a projected ROI of 200 percent over five years. The recommendation is to proceed with development, as the feature offers a significant opportunity for product differentiation and user satisfaction.

Mock feasibility study report

Now let’s see what a feasibility study report based on the above example scenario would look like (download an example here):

Introduction

The purpose of this feasibility study is to assess the viability of introducing an AI-powered task prioritization feature into our existing project management software. This feature aims to address the common user challenge of manually prioritizing tasks, which often leads to inefficiencies and missed deadlines. By automating this process, we expect to enhance user productivity, increase customer retention, and differentiate our product in a competitive market.

Market and user research

The total addressable market (TAM) for this AI-powered task prioritization feature includes all current and potential users of project management tools who manage tasks and projects regularly. Based on market analysis, the current user base primarily consists of mid-sized enterprises and large organizations, where task management is a critical component of daily operations.

  • Competitor analysis  — Key competitors in the project management space, such as Asana and Trello, offer basic task prioritization features. However, these solutions lack advanced AI capabilities that dynamically adjust task priorities based on real-time data. This gap in the market presents an opportunity for us to differentiate our product by offering a more sophisticated, AI-driven solution
  • User pain points — Surveys and interviews conducted with our current user base reveal that 65 percent of users experience challenges with manual task prioritization. Common issues include difficulty in maintaining focus on high-priority tasks, inefficient use of time, and the tendency to miss deadlines due to poor task management. Users expressed a strong interest in an automated solution that could alleviate these challenges, indicating a high demand for the proposed feature

Technical feasibility

  • AI algorithm development — The core component of the feature is an AI algorithm capable of analyzing multiple task parameters, such as deadlines, workload, and task importance. The development of this algorithm requires expertise in machine learning, particularly in natural language processing (NLP) and predictive analytics. Additionally, data processing capabilities will need to be enhanced to handle the increased load from real-time task prioritization
  • Integration with existing infrastructure — The AI-powered feature must be integrated into our existing project management tool with minimal disruption. This includes ensuring compatibility with current data formats, APIs, and the user interface. The integration will also require modifications to the UI/UX to accommodate the new functionality while maintaining ease of use for existing features
  • Data handling and privacy — The feature will process sensitive project data, making robust data privacy and security measures critical. Compliance with regulations such as GDPR is mandatory, and the data flow must be encrypted end-to-end to prevent unauthorized access. Additionally, user consent will be required for data processing related to the AI feature

Development timeline

  • Phase 1 (3 months) — Research and development of the AI algorithm, including dataset acquisition, model training, and initial testing
  • Phase 2 (2 months) — Integration with the existing platform, focusing on backend development and UI/UX adjustments
  • Phase 3 (1 month) — Extensive testing, quality assurance, and bug fixing to ensure stability and performance
  • Phase 4 (1 month) — Development of user training materials, documentation, and preparation for the product launch

Financial analysis

  • Development costs — Estimated at $300,000, covering salaries, cloud computing resources, machine learning tools, and necessary software licenses
  • Marketing and launch costs — $50,000 allocated for promotional campaigns, user onboarding programs, and initial customer support post-launch
  • Operational costs — $20,000 annually for ongoing maintenance, AI model updates, and customer support services
  • Pricing model — The AI-powered task prioritization feature will be included in a premium subscription tier, with an additional monthly fee of $10 per user
  • User adoption — Market research suggests that approximately 25% of the current user base (estimated at 10,000 users) is likely to upgrade to the premium tier within the first year
  • Projected revenue — First-year revenue is estimated at $1.2 million, with an anticipated annual growth rate of 10% as more users adopt the feature
  • Break-even point — The project is expected to reach its break-even point within 6 months of the feature’s launch
  • Five-year ROI — Over a five-year period, the feature is projected to generate a return on investment (ROI) of 200 percent, driven by steady subscription revenue and enhanced user retention

Risk assessment

  • AI algorithm complexity — Developing a sophisticated AI algorithm poses significant technical challenges, including the risk of inaccuracies in task prioritization. Multiple iterations and extensive testing will be required to refine the algorithm
  • Integration issues — Integrating the new feature into the existing platform could potentially cause compatibility issues, resulting in performance degradation or user dissatisfaction
  • User adoption — There is a possibility that users may not perceive enough value in the AI-powered feature to justify the additional cost, leading to lower-than-expected adoption rates and revenue
  • Support and maintenance — The ongoing support and maintenance required for the AI feature, including regular updates and monitoring, could place a significant burden on the development and customer support teams, potentially leading to resource constraints
  • Data privacy compliance — Handling sensitive user data for AI processing necessitates strict adherence to data privacy regulations such as GDPR. Failure to comply could result in legal repercussions and damage to the company’s reputation

Conclusion and recommendations

The feasibility study demonstrates that the proposed AI-powered task prioritization feature is both technically and financially viable. The feature addresses a significant user pain point and has the potential to differentiate the product in a competitive market. With an estimated ROI of 200 percent over five years and strong user interest, it is recommended that the project move forward into the development phase.

Next steps include finalizing the development plan, securing approval from key stakeholders, and initiating the development process according to the outlined timeline and resource allocation. Continuous monitoring and iterative improvements will be essential to ensure the feature meets user expectations and achieves the projected financial outcomes.

Overcoming stakeholder management challenges

The ultimate challenge facing most product managers when conducting a feasibility study is managing stakeholders.

Stakeholders may interfere with your analysis, jumping to conclusions that your proposed product or feature won’t work and deeming it a waste of resources. They may even try to prioritize your backlog for you.

Here are some tips to help you deal with even the most difficult stakeholders during a feasibility study:

  • Use hard data to make your point — Never defend your opinion based on your assumptions. Always show them data and evidence based on your user research and market analysis
  • Learn to say no — You are the voice of customers, and you know their issues and how to monetize them. Don’t be afraid to say no and defend your team’s work as a product manager
  • Build stakeholder buy-in early on — Engage stakeholders from the beginning of the feasibility study process by involving them in discussions and seeking their input. This helps create a sense of ownership and ensures that their concerns and insights are considered throughout the study
  • Provide regular updates and maintain transparency — Keep stakeholders informed about the progress of the feasibility study by providing regular updates and sharing key findings. This transparency can help build trust, foster collaboration, and prevent misunderstandings or misaligned expectations
  • Leverage stakeholder expertise — Recognize and utilize the unique expertise and knowledge that stakeholders bring to the table. By involving them in specific aspects of the feasibility study where their skills and experience can add value, you can strengthen the study’s outcomes and foster a more collaborative working relationship

Final thoughts

A feasibility study is a critical tool to use right after you identify a significant opportunity. It helps you evaluate the opportunity’s potential for success; analyze and identify its challenges, gaps, and risks; and gather data-driven market insights so you can make an informed decision.

By conducting a feasibility study, product teams can determine whether a product idea is profitable, viable, and feasible, and thus worth investing resources into. It is a crucial step in the product development process, especially when considering significant initiatives such as launching a completely new product or vertical.

For a more detailed approach and ready-to-use resources, consider using the feasibility study template provided in this post. If you’re dealing with challenging stakeholders, remember the importance of data-driven decisions, maintaining transparency, and leveraging the expertise of your team.


Content Marketing Institute

31 Great Content Writing Examples, Tips, and Tools

  • by Ann Gynn
  • | Published: August 21, 2024
  • | Content Creation

Great content writing must be powerful and effective to captivate your audience.

But accomplishing that with your content writing isn’t an easy task. Whether you craft words for B2B or B2C audiences, the challenges can be many.

To help, I’ve compiled web writing examples, tips, tools, and resources. The goal is to give you some insights and new tools to help address or minimize the creation stumbling blocks web and content writers face.

Let’s get to it.

1. Go for the surprise

When you write something that’s unexpected, your audience will likely stop scrolling and take a moment to learn more. In the worst cases, this approach to content writing falls under the nefarious clickbait category. But in the best cases, it can delight and engage the viewer.

Nike is always a go-to source for the best content examples. Its summer 2024 Winning Isn’t for Everyone campaign didn’t disappoint.

With a debut in time for the global games, Nike featured the world’s greatest athletes (well, all the great Nike-sponsored athletes) talking about how they are motivated by victory and how there’s nothing wrong with wanting to win. Writing those four words — winning isn’t for everyone — fosters a strong reaction. After all, far more people don’t win than do. But audiences are also likely to watch more of the videos to learn what Nike is really talking about.

As you watch the video, note the repetition of the same question (“Am I a bad person?”) followed by short, staccato-paced statements. This approach creates a lyrical story. And it paid off, earning over 2.2 million views in two weeks.

2. Don’t forget text has a starring role in video

Words appear in blog posts or descriptions of product features and benefits. But writers can also shine in video scripts, along with set designers, actors, and filmmakers. Writers can take any topic and help make it captivating.

J.P. Morgan used animation and strong scripts to explain finance-related concepts in its Unpacked series, a finalist in the Content Marketing Awards for best video. This 4.5-minute episode covers how private companies go public:

3. Tap into trends with simple writing prompts

I’m always a fan of Dove’s #KeepBeautyReal campaigns. Most recently, it created an example of powerful writing in this simple question: “What kind of beauty do we want AI to learn?”

Capitalizing on the AI trend and interest, Dove illustrates the difference between AI-created images for prompts about “beautiful women” and “beautiful women according to Dove’s Real Beauty ads.” In the first three months of its debut, the video with few words has earned over 100K views on Dove’s YouTube channel and garnered mainstream and industry media attention.

4. Let your audience create great writing and video examples

Creativity can emerge in many ways. Sometimes, it’s a simple starting point that reflects the times, as Dove did in its content example.

It also may lead a brand to contribute to its own pop culture trend, as Stanley did with its Quencher Cup social media campaign in 2024. Its influencer campaign prompted these fun user-generated examples of web writing and illustration in the form of memes and TikTok videos promoting the brand’s popular drinking vessel.

Hilarious Scales created this sample that’s been seen by over 10 million viewers:

@hilarious_scaless How yall be lookin with them Stanley Cups 🤣 #fypシ #fyp #stanleycup #stanleytumbler

Fans of hockey (that sport with the other Stanley Cup) also got into the action as Instagram account Daily Facebook shared this example:

A post shared on Instagram by DailyFaceoff (@dailyfaceoff)

5. Nail down your headlines

I’ve said it often: Headlines are the powerhouse of your content writing. After all, if the headline isn’t a success, the content behind it will never be read.

A 2024 study published in Science Advances analyzed over 30,000 field experiments with The Washington Post and Upworthy headlines. It found that readers prefer simpler headlines (more common words and more readable writing) over complex ones. They also paid more attention to and more deeply processed the simpler headlines.

The e-book headline in this example from OptinMonster is straightforward: 50 Smart Ways to Segment Your Email List. It uses a numeral (50), a helpful adjective (smart), and a second-person pronoun (your) to speak directly to the audience, all of which elevates the article’s value in the reader’s mind.

6. Analyze the potential impact of your content headlines

Size up headlines with the Advanced Marketing Institute’s Headline Analyzer , which reveals an emotional marketing value score.

This headline example — 14 Ways Marketing Automation Helps B2B Companies Succeed — earns an emotional marketing value (EMV) of 37.5%. Most professional copywriters’ headlines typically have a 30% to 40% EMV score.

The same headline in a similar tool, CoSchedule Blog Post Headline Analyzer , earns a score of 77 out of 100. This analysis looks at word balance, headline type, sentiment, reading grade level, clarity, and skimmability. It also identifies areas for improvement, such as the use of uncommon, emotional, and power words.

7. Adjust title formats with this content writing tool

Speed up your formatting tasks with TitleCase . The tool converts your title into various formats — all caps, hyphenated, etc. — so you don’t have to rekey or reformat.

8. Write headlines with words that resonate

BuzzSumo research consistently finds that “how-to” or guidance-focused headlines resonate far better with audiences than any other type.

It makes sense. Audiences are seeking information that will help them in their lives, and they have a lot of content from which to choose. By writing phrases like “how to” in a headline, you tell them clearly what they’re going to get.

Get more tips from CMI’s article How To Create Headlines That Are Good for Readers and Business .

9. Focus on clarity for web content

Explaining your product or service can get cumbersome, but it shouldn’t if you want the audience to quickly understand how your company can help solve their pain points.

In this example, Zendesk succinctly highlights three results gained by the enterprise clients of its customer service platform:

  • Drive better conversations
  • Maximize agent efficiency
  • Adapt faster to change

The three- and four-word headlines are followed by short explanations (two sentences) and a link to the product’s relevant features for that category.

10. Write to win over readers

How does your content inspire readers or get them to care?

Some suggestions include:

  • Focus on actionable content they could use right away.
  • Establish instant credibility and expertise so they understand why you’re the go-to resource.
  • Add value they wouldn’t see or find elsewhere.

This ad for the Content Marketing Institute newsletter works well as a sample of website content writing. It illustrates how to motivate the audience to see that the content is relevant for them. Its headline “Looking for Fresh Content Inspiration?” speaks directly to the reader. Its follow-up sentence explains in detail what the reader will get — expert advice, standout examples, and creative ideas.

11. Choose words that motivate actions

Sometimes, it’s a simple word or phrase that prompts someone to take the next step. Buffer offers a list of more than 150 words . Words and phrases like these are examples of how to gain the audience’s trust:

  • Bestselling
  • Endorsed by
  • Money-back guarantee
  • No obligation
  • No questions asked
  • Recommended
  • Transparent
  • Try for free

In this web page example, OptinMonster opts for one of those words in its headline — How To Create a Fail-Proof Digital Marketing Plan in 5 Steps .

12. Keep it brief but convey a lot

Given your audience reads on screens, your web writing usually appears in a small space. Yet, it still must reflect a strong message.

For example, this American Express Business web copy uses five words to indicate that it gets the reader’s problem — “Don’t stress over seasonal surges.” Then, it uses another five words to indicate that it has a solution — “Help keep your business thriving.” On the right, it shows the product name that will do all that (American Express business line of credit).

13. Create compelling content with better words

Choosing a single word to convey the perfect sentiment makes the most of your available content space. To help, Jon Morrow of Smart Blogger offers a collection of words that can make a difference in your writing: 801+ Power Words That Make You Sound Smart . One example:

  • Frightening

In this headline — Firefox Hacks for Everyone: From Cozy Gamers to Minimalists and Beyond — the Mozilla blog opted for one of the power words, “hack.”

I’ll issue a caveat on this option: Power words can quickly become overused. “Hack” is coming close to saturation.

14. Length isn’t everything

I like to know content length rules and preferences. They give me guideposts for my web writing.

Google makes 30 characters available in its ad headlines, and it’s hard to go shorter than that. This simple sample — Best Enterprise CRM Platform — is 28 characters.

On social media, though, the character parameters are greater, and you could improve engagement by falling short of the upper limits.

Instagram is a perfect example of where writing content short of the 2,200-character maximum caption is a better decision. In fact, experts say the ideal length is 125 characters, which takes up the space visible before the viewer must click to read more.

Still, sometimes writing fewer than 125 characters can work well and draw attention in a crowded feed, as this sample from Grammarly shows. Its caption — “Learn actionable strategies for leveraging Gen AI to elevate your team’s productivity.” — totals just 88 characters.

Of course, exceptions exist. If your content’s primary goal is search engine optimization, longer content is almost always best. As a website ages, it may be able to get by with shorter pieces because it’s already established authority and has more pages, inbound links , etc. However, extended content often helps generate high rankings for targeted keyword phrases and similar words.
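A quick way to sanity-check copy against these limits is a few lines of Python. The limit values below are the ones cited in this section (30-character Google ad headlines, Instagram’s 2,200-character maximum and 125-character ideal), not an exhaustive list.

```python
# Check copy against per-channel character limits mentioned above.
LIMITS = {
    "google_ad_headline": 30,     # hard limit
    "instagram_caption": 2_200,   # hard limit
    "instagram_ideal": 125,       # visible before "read more"
}

def check_length(text: str, channel: str) -> tuple[int, bool]:
    """Return (character count, whether it fits the channel's limit)."""
    count = len(text)
    return count, count <= LIMITS[channel]

print(check_length("Best Enterprise CRM Platform", "google_ad_headline"))  # (28, True)
```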

15. Choose short words for your web writing

You don’t need to use a lot of words to get your point across. Short ones can work in your favor. Consider these common examples of better choices:

  • “Show,” not “indicate”
  • “Get rid of,” not “eliminate”
  • “Use,” not “utilize”
  • “To,” not “in order to”
  • “Help,” not “facilitate”
  • “Get,” not “obtain”

16. Use a tool to keep track of word counts

Meet your word count goals and improve your word choice with the WordCounter tool. It also helps identify keywords and their appropriate frequency of use.

17. Recognize common writing mistakes

Grammar Girl , created by Mignon Fogarty, founder of Quick and Dirty Tips, outlines some common mistakes, such as this example on the use of that vs. which in writing.

“The simple rule is to use ‘that’ with a restrictive element and ‘which’ with a non-restrictive element … The cupcakes that have sprinkles are still in the fridge. The words “that have sprinkles” restrict the kind of cupcake we’re talking about. Without those words, the meaning of the sentence would change. Without them, we’d be saying that all the cupcakes are still in the fridge, not just the ones with sprinkles.”

18. Use parallel construction

Parallel construction organizes the text and relieves your readers of expending mental energy to piece together the thoughts.

For example, this mish-mash list is not parallel because the sentence structures vary:

  • It could be time to look over your business software contract.
  • Consider the best products.
  • If you want the product to benefit your company, include others’ points of view.

This revised list is parallel because every sentence starts the same way – with a verb:

  • Review your business software contract.
  • Shop for the best products based on features, costs, and support options.
  • Ask key members of your team for their perspectives, including productivity barriers.

19. Know when to break the infinitive rule

Avoid splitting infinitives. However, sometimes you might need to bypass grammatical correctness in favor of content that doesn’t read awkwardly.

Pro Writing Aid explains that split infinitives are nothing new — their use dates back to the 1300s. However, there is a time and place for them, as shown in this example from Northern Illinois University’s Effective Writing Practices Tutorial :

  • Split infinitive but easily understood: It’s hard to completely follow his reasoning.
  • No split infinitive, but awkwardly written: It’s hard to follow completely his reasoning.

20. Be conscious of pronouns

A conversational approach typically works best when you’re creating web content. Writing in the first or second person can accomplish this.

Embracing inclusivity also fosters a conversational atmosphere.

When you’re using pronouns, make sure it’s clear what each pronoun refers to. Given that some people use they/them pronouns, ensuring pronoun clarity is especially important.

In those cases where the reader may be confused, explain the person’s use of the plural non-gendered pronoun in the text, for example, “Alex Alumino, who uses they/them pronouns …” Even better, just repeat their name in the sentence so there’s no need to explain and no misunderstanding.

21. Don’t overuse words

Redundancy bores. To figure out if you’re committing this sin, paste your text into the Word It Out tool. The resulting word cloud reveals the words used most often in your text.

We input a recent CMI article about user stories to create a word cloud for that content sample. It is no surprise that “user” shows up front and center, but it’s also an indicator for us to review the article to see if “user” is overused. “Katie” also shows up prominently in the word cloud as it’s the first name of the source for the article, and CMI uses first, instead of last names, on second and subsequent references. A review of the article could reveal it unnecessarily references the source too many times.

Similarly, WordCounter detects whether you’re using the same words too often. Use Thesaurus.com to find alternatives.
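Under the hood, word-cloud and word-counter tools do little more than tally word frequencies. A minimal sketch of the same idea:

```python
# Tally word frequencies to spot overused words, the same idea behind
# word-cloud and word-counter tools.
import re
from collections import Counter

def top_words(text: str, n: int = 5) -> list[tuple[str, int]]:
    """Return the n most frequent lowercase words in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(n)

sample = "User stories help the user. A user story keeps the user in focus."
print(top_words(sample, 3))  # "user" dominates this sample
```

Filtering out stop words like “the” before counting makes the signal even clearer.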

22. Try this content writing tool to replace jargon-like words

You need to speak your audience’s language, but that doesn’t mean you need to adopt the industry’s jargon. De-Jargonizer is designed to help analyze the jargon in scholarly articles, but the tool works just as well with your content writing.

In this example from a CMI article about building a social media plan , De-Jargonizer identifies four “rare” words — ebbs, inhospitable, clarifies, and actionable.

You can upload a file or paste your text to discover those rare words, aka potential jargon, in your content writing. Then, you can find more reader-friendly replacements.
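The rare-word idea behind a tool like De-Jargonizer can be approximated with a frequency list: any word absent from a list of common English words gets flagged. A hypothetical sketch, where the tiny COMMON_WORDS set stands in for the large corpus-derived list a real tool would use:

```python
import re

# Tiny stand-in for the large corpus-derived frequency list a real tool uses.
COMMON_WORDS = {
    "the", "a", "an", "and", "or", "to", "of", "in", "is", "it", "you",
    "your", "build", "plan", "social", "media", "that", "goals",
}

def flag_rare_words(text):
    """Return words absent from the common-words list, i.e. potential jargon."""
    words = re.findall(r"[a-z]+", text.lower())
    return sorted({w for w in words if w not in COMMON_WORDS})

print(flag_rare_words("Build an actionable social media plan that clarifies your goals."))
# → ['actionable', 'clarifies']
```

Each flagged word is a candidate for a more reader-friendly replacement, not an automatic rejection.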

23. Check your readability score

Even if readers can understand the jargon and complex sentences, they still don’t want to work hard to understand your content. To gauge how easy your writing is to read, use a tool like Web FX’s Readability Test. It scores your content’s average reading ease and targeted readership age.

In this example, it evaluates the Fedex.com website and concludes it has a reading ease of 27.8 out of 100 and is targeted at 14- and 15-year-olds.

You can scroll down to see other readability scores, including Flesch Kincaid reading ease, Flesch Kincaid grade level, Gunning Fog, Smog Index, Coleman Liau, and Automated Readability Index.

The bottom of the evaluation includes the statistics about the evaluated text, including:

  • Total sentences
  • Total words
  • Complex words
  • Percent of complex words
  • Average words per sentence
  • Average syllables per word

Adjust your writing to meet the preferred readership level of your audience.
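The Flesch reading-ease score these tools report is computed directly from the statistics in that list: sentences, words, and syllables. A quick sketch of the standard formula (the sample counts below are made up for illustration):

```python
def flesch_reading_ease(total_sentences, total_words, total_syllables):
    """Flesch reading ease: higher scores (up to roughly 100) mean easier text."""
    words_per_sentence = total_words / total_sentences
    syllables_per_word = total_syllables / total_words
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

# Hypothetical stats: 10 sentences, 150 words, 210 syllables
print(round(flesch_reading_ease(10, 150, 210), 1))  # → 73.2
```

Shorter sentences and shorter words both push the score up, which is why the tips above about simple wording also improve measured readability.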

24. Evaluate sentence structure with the Hemingway App

Want more help to write content that’s easy to read? Consider tools like the Hemingway app, which provides immediate and detailed feedback on content structure, including sentence formatting. With the website version, you can replace the default text with your own.

The Hemingway app identifies potentially unnecessary adverbs, warns about passive voice, and triggers alerts to dull, complicated words.

In this web writing example from its home page, Hemingway App highlights one of the 13 sentences as very hard to read, one as hard to read, two weakener phrases, and one word with a simpler alternative.

25. Get web writing right with good grammar

Proper grammar is a necessity; you want to get everything correct to satisfy readers (and bosses). Try Grammarly.

Improve your writing with this cloud-based, AI editor. Grammarly automates grammar, spelling, and punctuation checks, often giving better, cleaner content options. The tool also alerts writers to passive voice, suggests opportunities to be concise, and assesses overall tone.

You also can save time and energy with ProWritingAid, which reduces the need to reread and polish your content manually. This AI editing software offers more than grammar checks: it checks for vague wording, sentence length variation, and overuse of adverbs and passive voice, and it identifies complicated or run-on sentences.

26. Read your web content in scanning mode

Here’s some sad news for content writers: Readers won’t consume every word in your content. They skip and scan a lot to see if the content is a good fit for them, and then they hope they can glean the relevant information without having to consume all the content.

As you write, think about how the text will look visually. Make it easy for readers to scan your content by including:

  • Short paragraphs
  • Bulleted lists
  • Bolded text
  • Words in color

27. Read aloud

If your content doesn’t flow as you speak it, it may not work for the reader. Pay attention to when you take too many pauses or pause in places where no comma exists. Adjust your text — add a comma or break the sentence into two.

Microsoft Word offers a read-aloud feature through its immersive reader tools, while Google Docs can use a Chrome extension to give a voice to the content.

28. Use plagiarism checkers

In recent years, advancements in artificial intelligence have prompted growth in automated plagiarism checkers. Microsoft Word embeds the feature in its software, as does Grammarly. You also can use tools dedicated to ensuring that content writing isn’t a copycat (or being copycatted), including:

  • Unicheck – Verify the originality of work with plagiarism detection. You can spot outright copying and minor text modifications in unscrupulous submissions.
  • Copyscape – Protect your content and your reputation. Copyscape uncovers plagiarism in purchased content and detects plagiarism by others of your original work.

Of course, no plagiarism checker is 100% accurate, so before you accuse a content writer of plagiarism, triple-check the results (and add a human touch whenever appropriate).
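One common way checkers catch both outright copying and minor text modifications is to compare overlapping word n-grams (“shingles”) between two texts: heavy overlap survives small rewording. A simplified sketch, not how any particular product works:

```python
def jaccard_shingles(a, b, k=3):
    """Jaccard similarity over word k-grams; values near 1.0 suggest copied text."""
    def shingles(text):
        words = text.lower().split()
        return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

original = "content analysis is a research method used to identify patterns"
reworded = "content analysis is a research method used to find patterns"
print(round(jaccard_shingles(original, reworded), 2))  # → 0.6
```

A single swapped word still leaves most shingles intact, which is why lightly “spun” text scores high and why a human should always review flagged matches.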

29. Use a topic tool for writing inspiration

HubSpot’s Ideas Generator works well to get your creative content writing juices flowing. Just fill in the fields with three nouns to get some ideas.

For example, if you input the words car, truck, and SUV, HubSpot delivers these ideas along with the targeted keywords for the topic:

  • Keyword: Top truck accessories
  • Keyword: Comparing SUV models
  • Keyword: Truck bed organization ideas

HubSpot’s topic generator also allows users to pick a title and have an outline created for that article.

You also could perform a similar exercise by writing the prompts in other generative AI tools, such as ChatGPT and Gemini.

NOTE: Always review the titles and accompanying data to ensure accuracy. In the HubSpot sample, the generator included a headline — Discover the Best SUVs for Families in 2021. Yet, it’s 2024.

30. Know SEO responsibilities in web writing

Sometimes writers create content with multiple purposes and carry the burden of blending SEO into the content. I frame it as a burden because it’s one more variable to deal with, but if you have a knack for SEO and goals you can measure, it’s not a burden.

Unfortunately, you sometimes don’t know which keywords are realistic to pursue. Aim too low, and you use rarely searched keywords; aspire to something too competitive, and the content won’t rank.

How are you evaluating keywords? Learn how to find your sweet spot with keyword selection (and how to appear on the first page of Google). Identify potential keywords by using tools like:

  • Moz Keyword Explorer
  • Google’s Keyword Planner
  • Keyword Tool
  • AnswerThePublic
  • Neil Patel’s Ubersuggest

31. Monitor relevant topics to get ideas for your content

With Feedly , you can stay informed about what matters most and avoid information overload. This AI assistant learns your preferences, then culls and curates content from the internet that you want and need.

Share your favorite writing tricks

What content creation and copywriting productivity tools do you favor? What do you do each day to make your writing tasks just a little easier? Please tag CMI on social media using #CMWorld.

All tools mentioned in this article were suggested by the author. If you’d like to suggest a tool, share the article on social media with a comment.

Register to attend Content Marketing World in San Diego. Use the code BLOG100 to save $100. Can't attend in person this year? Check out the Digital Pass for access to on-demand session recordings from the live event through the end of the year.

HANDPICKED RELATED CONTENT:

  • 7 Ancient Archetypes That Give Your Content Fresh Relevance
  • How To Write Faster With or Without an AI Assist
  • How To Get Branded Content Right: Examples, Ideas, and Tips
  • How To Catch Audiences With Extraordinary Hooks
  • New Study Reveals Clear Writing Tips for B2B Marketers
  • 6 Easy Things You Can Do To Improve the Content Experience for Your Audience
  • How To Turn Old Content Into a New Work of Art With an AI Assist

Cover image by Joseph Kalinowski/Content Marketing Institute

Ann Gynn


AI Writing Plugins for Bloggers to Enhance Your Content

  • by Vikas Singhal
  • 28 Aug 2024

Imagine having an assistant who never gets tired, works around the clock, and is fueled by cutting-edge technology. Welcome to the world of AI writing plugins. These innovative tools are transforming how bloggers create content, injecting efficiency, creativity, and precision into every post.

This listicle will reveal the many advantages of using AI writing plugins, from enhanced SEO to faster content generation and beyond. If you dream of making your blog more engaging, impactful, and easy to manage, then this is a must-read.

10Web AI Assistant – AI Content Writing Assistant

10Web AI Assistant is a powerful AI writing plugin designed to revolutionize content creation in WordPress. This plugin leverages advanced AI algorithms to generate unique, SEO-optimized, and plagiarism-free content directly from your Gutenberg block editor and Classic Editor.

10Web AI Assistant is not just a content generator; it also functions as a content editor and optimizer, promising to transform your content creation process, boost your productivity, and enhance your site’s visibility in search engines.

The 10Web AI Assistant stands out among competitors due to its unique, WordPress-specific design. Whether you’re a blogger, content creator, or website owner, this plugin offers you a comprehensive solution to content creation and editing.

Key Features:

  • Create well-crafted, engaging content quickly and easily.
  • Automatically optimize your content for search engines to improve rankings.
  • Receive recommendations for images that complement your content.
  • Ensure your content is properly structured with AI-based formatting tools.
  • Use pre-built templates to speed up content creation.

Key Metrics:

  • Total installations: Over 20,000
  • Users: More than 8,000 active users
  • Ratings: Rated 4.6 out of 5 stars
  • WordPress Compatibility: Compatible with WordPress 5.0 and later versions

Practical Use Cases:

Content marketers and bloggers can save time by generating high-quality, SEO-optimized blog posts and articles.

10Web AI Assistant offers a range of pricing options to suit different needs:

  • Starter: $10.00 / Month (billed annually)
  • Premium: $15.00 / Month (billed annually)
  • Ultimate: $23.00 / Month (billed annually)

AI Muse – AI-Driven Content Creation for WordPress

AI Muse is a powerful AI-driven plugin designed to transform your content creation experience on WordPress. Supporting more than 100 AI models, including OpenAI, Google AI, and OpenRouter, AI Muse brings the future of content generation straight to your WordPress Block Editor or Site Editor. Whether you’re crafting engaging articles, generating SEO-optimized product descriptions, or creating stunning images, AI Muse offers the versatility and precision you need.

Key Features:

  • Easily create and draft new posts, topics, and WooCommerce products with advanced AI tools.
  • Personalize your AI interactions with custom prompts and templates to match your unique content needs.
  • Generate beautiful, AI-driven visuals that complement your content or match your creative ideas.
  • Track AI usage, costs, and performance with detailed analytics and charts.
  • Works flawlessly with WooCommerce, supporting various post types and simplifying bulk content creation.

Key Metrics:

  • Total installations: Over 15,000
  • Users: More than 7,000 active users
  • Ratings: Rated 4.7 out of 5 stars

Bloggers can use this AI writing plugin to generate blog posts, articles, and creative content with ease and precision. They can also use it to populate WooCommerce product descriptions and visuals efficiently.

AI Muse offers flexible pricing options:

  • Premium: $4.17/mo (1-site license)
  • Plus: $26.67/mo (10-site license)
  • Agency: $158.34/mo (100-site license)

GetGenie – AI Content Writer with Keyword Research and Competitor Analysis

getgenie-banner

The GetGenie AI Writing Plugin is powered by GPT-4o, which is the latest AI technology for content generation. It can understand and generate text based on visual information, making it a versatile tool for content creation. Whether you need to create blog posts, social media copies, or product descriptions, this plugin has got you covered.

Key Features:

  • Generate high-quality, SEO-optimized content in a few clicks.
  • Find the best keywords for your content with the help of advanced AI algorithms.
  • Analyze your competitors’ content and use the insights to improve your own.
  • Create unique images for your content using AI.
  • Engage with your audience using a live chatbot powered by AI.

Key Metrics:

  • Total installations: Over 10,000
  • Users: More than 5,000 active users
  • Ratings: Rated 4.5 out of 5 stars

Bloggers can use this plugin to generate content for their clients’ websites. It can save them a lot of time and effort as they won’t have to write the content manually. They can also use it to conduct keyword research and competitor analysis, which can help in improving the SEO of the websites they are working on.

The GetGenie AI Writing Plugin is available for free. However, you can upgrade to the premium version for additional features and benefits.

  • Agency Unlimited: $59.40 / Month (billed annually)
  • Pro: $29.40 / Month (billed annually)
  • Writer: $11.40 / Month (billed annually)
  • Starter: $6.00 / Month (billed annually)

WP AI CoPilot – AI Content Writer Plugin (ChatGPT, GPT-3/4)

ai-co-pilot-for-wp-banner

WP AI CoPilot is the ultimate AI-powered content creation plugin designed to transform your WordPress site. Leveraging the capabilities of GPT-3 and OpenAI, this plugin effortlessly generates high-quality articles, blog posts, product descriptions, and more. Whether you’re battling writer’s block or simply looking to streamline your content creation process, WP AI CoPilot is your go-to solution.

Key Features:

  • Quickly generate well-crafted content tailored to your specific topics and needs.
  • Customize and experiment with the GPT-3 model to create unique content solutions for your website.
  • Receive AI-driven recommendations and editing support to enhance the quality and readability of your writing.
  • Create content in multiple languages, expanding your reach to a global audience.
  • Automatically generate content optimized for search engines, improving your site’s visibility.
  • Fine-tune AI parameters, content length, and other settings to achieve the desired output.

Key Metrics:

  • Total installations: Over 12,000
  • Users: More than 6,000 active users

Bloggers can use this AI writing plugin to effortlessly generate engaging blog posts and articles with AI assistance. They can also streamline content management and improve writing quality across their websites.

WP AI CoPilot offers flexible pricing options:

  • Basic: $10/month ($5/month)
  • Standard: $15/month ($8/month)
  • Enterprise: $25/month ($12/month)

Bertha AI – Your AI Co-Pilot for WordPress and Chrome

bertha-ai-free-banner

Bertha AI is an innovative AI writing plugin designed to revolutionize your content creation experience on WordPress. With state-of-the-art AI capabilities, Bertha AI can assist you in crafting compelling content, from blog posts to product descriptions, and everything in between. It’s your go-to tool for creating high-quality, SEO-optimized content with ease.

This AI writing plugin is a powerful tool that generates unique and captivating content for your website, blog posts, social media, and more. With its advanced language models, Bertha AI can create high-quality content that engages your audience and boosts your online presence.

One of the standout features of Bertha AI is its ability to automatically generate descriptive Alt Text for every image you upload. Alt Text is crucial for improving your website’s accessibility and enhancing your SEO efforts, so this feature saves you time while ensuring your content works for search engines and for users with visual impairments.

  • Write product descriptions, articles, blog posts, website copy, marketing copy, and even SEO titles and Meta Tags.
  • Generate Alt text for every image upload for better SEO ranking.
  • Works seamlessly with major page builders, SEO, Ecommerce, and LMS plugins including Divi Theme, Elementor, Yoast SEO, WooCommerce, and more.
  • Advanced AI Technology Built on top of multiple Large Language Models for unique content creation.

Since its launch in September 2021, Bertha AI has rapidly gained popularity among WordPress users and developers, and its compatibility with both WordPress and Chrome has made it a preferred choice for many. Although still growing, the team says positive feedback and continuous support from users motivate it to make Bertha AI even better.

Bloggers use Bertha AI for a range of applications. From creating engaging blog posts and articles to crafting persuasive product descriptions for e-commerce websites, Bertha AI is their trusted partner. It also helps in generating compelling website copy for each section of every web page and crafting marketing copy that converts.

Bertha AI offers a free version with full access to all models except Chat and Write, limited to 500 words for evaluation. If you wish to upgrade, the Pro version is priced at $96 per year or $35 per month for three months.

How do AI Writing Plugins Improve the Quality of Blog Content?

AI writing plugins can significantly improve the quality of blog content in several ways:

Enhanced Creativity and Ideation

AI writing tools can help bloggers overcome writer’s block by providing fresh ideas, angles, and perspectives. They can suggest topics, titles, and outlines to kickstart the writing process and inspire more creative content.

Improved Grammar and Clarity

Many AI writing plugins come equipped with advanced grammar and spelling checkers that go beyond basic proofreading. They can identify complex grammatical errors, awkward phrasing, and unclear sentences, helping bloggers communicate their ideas more effectively.

Optimized for Search Engines

AI writing assistants can analyze content for SEO factors like keyword density, readability, and meta tags. They provide recommendations to optimize blog posts for search engines, increasing the chances of ranking higher in SERPs and driving more organic traffic.
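Keyword density, one of the SEO factors these assistants report, is simply the keyword’s share of the total word count. A minimal sketch (the sample post and keyword are illustrative):

```python
import re

def keyword_density(text, keyword):
    """Percentage of words in the text that match the keyword."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w == keyword.lower())
    return 100 * hits / len(words)

post = "SEO tips help. Good SEO content ranks. Write content readers want."
print(round(keyword_density(post, "SEO"), 1))  # → 18.2
```

Assistants typically flag densities that are far too high (keyword stuffing) as well as a target keyword that barely appears at all.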

Consistent Tone and Style

By learning a blogger’s unique writing style, AI tools can help maintain a consistent tone and voice across multiple posts. This brand consistency is crucial for building a strong online presence and connecting with readers.

Reduced Editing Time

AI writing plugins can generate high-quality first drafts, significantly reducing the time and effort required for editing and revisions. Bloggers can focus on refining the content rather than starting from scratch, boosting their overall productivity.

In summary, AI writing plugins empower bloggers to create more engaging, optimized, and consistent content with greater efficiency. By leveraging these advanced tools, bloggers can elevate their content quality and achieve better results in terms of reader engagement and search engine rankings.

Final Words

AI writing plugins present a golden opportunity for bloggers to streamline their content creation process. The benefits are multifold, from generating unique, SEO-optimized content to enhancing productivity and improving audience engagement.

Having a grasp of these options allows bloggers to simplify their updates process and meet their goals more effectively, whether it’s growing their audience, boosting engagement, or establishing themselves as an authority in their field.

Don’t let the opportunity pass you by. Harness the power of AI writing plugins and transform your content creation process today. Take the first step towards optimizing your content strategy, increasing your productivity, and achieving your business goals. Remember, the right tools are just a click away!
