Internet Trolling, Its Impact and Suggested Solutions Essay


Table of Contents

  • Introduction
  • Perpetrators and victims of internet trolling
  • The effects of trolling on individuals and society
  • Suggested solutions to trolling activities
  • Reference list

Introduction

Growing access to internet resources has brought more than the advantage of obtaining necessary information and services. Along with these benefits, people may fall victim to serious dangers posed by hackers, trolls, phreakers, and other deviant and antisocial groups. Internet trolling is one of the most frequently used jargon terms of our century. The activity involves harming internet users for the enjoyment of the person doing the harm (an internet troll) or for the entertainment of the audience that person wants to impress (Bishop 2013). The problem of internet trolling grows every day as trolls acquire ever more opportunities to spread their unkind word across the World Wide Web.

The vast extent of internet trolling is partially explained by the diversity of topics in which trolls engage: political, sexual, racial, gender, and many other spheres where they exercise their provocative activity. Trolling fluctuates between the dubiously distasteful and the almost illegal. Trolls provoke their victims with ostensibly sexist or racist remarks, post outrageous images with the aim of wrecking a discussion, and fill conversation threads with preposterous misinterpretations of other users’ opinions (Phillips 2015). The scope of trolling is remarkably large, as trolls are apt to weigh in on any subject one can possibly imagine. Because trolls have no personal feelings towards what they write, they can write about anything. What gives them further freedom is the legal protection afforded by the free-speech guarantees of the First Amendment (Phillips 2015). The extent of trolling activity is thus wide, and it grows wider every day.

Depending on what kind of activity brings the most fun, trolls fall into a number of categories. There are trolls who love getting people enraged (“rabid flamers”); trolls who luxuriate in correcting others’ mistakes (“priggish grammar trolls”); “crybabies” who threaten never to come back when someone hurts their feelings but always return; the “never-give-up, never-surrender” type who always has to be right; and the “retroactive stalkers” who will not calm down until they have found something embarrassing in your history to bring up every time you post a thing.

There are also such types as “lame teenager,” “self-feeding troll,” “bored hater” and “Nellie McNeggerson” who enjoy complaining and contradicting others (Grande 2010). “Sharing troll” is the one who discloses your personal information if he/she is angry with you, “profane screamer” asserts his/her opinion by writing in capitals, “white knight” defends someone even if nobody asks him/her for that, “expert” behaves as if he/she knows everything about everything. Other popular varieties of trolls are “spoiler” who reveals the film endings and sports match results, “fraud” who steals money or personal secrets, and “flooder” who posts the same thing repeatedly. Finally, there are “liar” and “stalker” types who both try to seduce others. The difference is that the “stalker” can actually be harmful while the “liar” is usually inoffensive (Grande 2010).

Psychological premises for trolling are concerned with invisibility, anonymity, and the absence of real-time communication (Stein 2016). Trolls emphasize that, unlike the clearly abusive messages posted by flamers, their messages are merely provocative and meant to bring fun (Bishop 2014). However, revealing people’s personal data or causing conflicts is not considered funny by other internet users.

Popular victims of trolling include women defending the feminist movement, celebrities, visitors to tourism and hospitality websites, and even people who have passed away – RIP pages frequently become objects of trolling (Lumsden & Morgan 2016). Anonymity allows trolls to say whatever it takes to reach their aim: provoking a fight among internet users. The societal aspect of such provocations interests trolls most of all.

Female victims claim that the majority of offensive messages on the internet are gender-related (Lumsden & Morgan 2016). These women say they have to choose their language very cautiously so as not to initiate any incidents, yet the trolls do their best to provoke fights in the comment feeds of female activist blogs and articles. Sexist trolling is almost as frequent as racist trolling, but the methods of fighting sexism are much less powerful than those for dealing with racism (Lumsden & Morgan 2016).

Celebrities are among the most popular trolling victims because they have many admirers and followers. Trolls are happy to cause fights among the fans of popular culture idols about whom they may actually not care at all (Lumsden & Morgan 2016). Unlike other people, however, celebrities may even gain an advantage from trolling: the more their lives are discussed, the more fame – and, consequently, income – they obtain.

Trolling of the tourism and hospitality industry is performed via social media sites (Mkono 2015). The aim of trolls in this case is to undermine the reputation of tourist companies by leaving fake negative reviews about facilities and services. So far, this variety of trolling is difficult to fight because messages on websites such as TripAdvisor are anonymous, and it is impossible to track the huge number of trolls leaving comments there (Mkono 2015). Removing the anonymity option would reduce the harm done to tourism companies.

Basically, trolls do not take what they do seriously, so they never lose: they alone decide whether to give any weight to their own words. Thus, the problem of immorality lies not in what trolls utter but in the absence of consequences of their utterances for themselves (Phillips 2015). That, however, is only their own point of view. For everyone else, trolling is an activity with a disastrous impact on society in general and on individual people in particular. Trolls are able to cause fights between individuals and between whole groups of people, and the more conflict they manage to provoke, the happier they are. Trolls appear unconcerned with the future of mankind and feel no responsibility for provoking dangerous conflicts at all.

Although trolls can affect people of all ages and social statuses, the most dramatic effect is on teenagers, who are vulnerable and susceptible to any kind of attack – real or virtual. There have been cases of teens committing suicide after becoming victims of trolling. When trolls go too far and reveal people’s personal information, such as photos or details of their personal history, they do not realize how destructive their activity may be. Teenagers cannot stand being blackmailed and trolled, and there have been many sad stories connected with such activities (Millet 2014). Parents are concerned about trolling because it undermines teenagers’ self-esteem and confidence. However, while teens may be the most vulnerable social group, trolls can affect anyone. People constantly suffer from negative comments and get frustrated by unnecessary posts that waste their time. Psychologists recommend developing defensive reactions such as ignoring trolls’ comments and distancing oneself from negative information. Still, not everyone is able to restrict his or her attention to positive things and remain untouched by the antagonistic messages trolls send.

It is not only individuals who can be bullied by trolls; society is injured as well. The greatest impact trolls produce on society is that they succeed in dividing it into opposing sides (Rani 2016). Social media have always been a way for individuals to share their viewpoints and find those with similar opinions. With the advent of trolling, hostile activities of various kinds were awakened. Instead of considering it bad to offend someone, people have become so used to insulting and disrespectful behavior that they consider it a part of normal life (Rani 2016). Moreover, some individuals begin to feel influential when they troll others and soon cannot stop. In this way, trolling gradually builds a polarised society. The threat of such societal changes is that people become less humane and friendly and tend towards cruelty and hostility.

Trolling is a fast-growing risk for individual people and whole societies which has a tendency to develop and find new intricate ways of expression every day. To eliminate the negative impact of internet trolls, people should establish effective interventions at various levels.

Since trolls have many techniques at their disposal, the fight against their destructive effects requires a versatile approach. To solve the problem of trolling, it is necessary to address it at the individual level, at the level of online media corporations, and at the legislative level. For dealing with trolls at the personal level, there is a golden rule for every internet user: “do not feed the trolls” (Grande 2010). This advice means that one should not be provoked by trolls’ messages and insults. It may not be easy, but the outcomes are positive: no stress, no spoilt mood, and no wasted time. However, a rational decision to disregard trolls may not be sufficient; frequently, more targeted interventions are needed (Sanfilippo, Yang & Fichman 2017). In the case of deviant trolls, methods such as cutting trolls off or exposing their identity may be employed. Additionally, internet users consider ignoring trolls not only a sound reactive measure but also a useful preventive one.

As for the steps taken by online media corporations, their fight against trolling requires far more time and resources. First, they need to check all posts to determine whether any are written by trolls. Then they need to create barriers for such posts and block unwanted kinds of messages. These activities require more people working on websites and more money to pay their salaries; the insufficiency of such resources is the main reason the internet has such a big trolling problem. Another serious issue is the anonymity granted to internet users, which prevents online media corporations from controlling their visitors. This problem is what connects the media companies with the legislative system.

The government’s regulation of trolling is quite limited by the First Amendment, which guarantees every citizen freedom of speech (Phillips 2015). However, with the increasing damage done by internet trolling, the governments of many countries are developing strategies for confronting trolls and preventing their adverse impact on internet users. For instance, the UK adopted the Communications Act 2003, which regulates mobile phone calls, emails, text messages, and internet messages (Lumsden & Morgan 2016). Section 127 of this Act states that sending offensive or indecent messages is an offense regardless of whether the message is ever received. With the growing number of offensive cases provoked by trolling, in 2012 the UK government initiated an amendment to the Defamation Act that would enable the government to track the identities of internet users.

At the same time, internet providers would not be punished for their users’ publications on the condition that they share information about those users (Lumsden & Morgan 2016). In a House of Commons debate initiated in 2012, some Members of Parliament emphasized that changing regulations regarding anonymity would threaten freedom of speech. Furthermore, legal adjustments alone are not enough to deal with problems of internet deviance. Apart from legislative changes, alterations in people’s cultural lifestyles are also necessary (Lumsden & Morgan 2016). Cultural transformations are especially important in view of the modern “sexualized” behavior of celebrities widely illustrated in different media formats such as newspapers, reality TV shows, and magazines.

Therefore, while it is impossible to implement new laws instantly, there are things that any sober-minded person can do to avert the adverse outcomes of communications with trolls. The basic rule is not to provoke any reaction on their part and to stay away from their negative posts.

Trolling is a fast-growing problem of modern society. With so many people going online, more and more individuals are exposed to troll messages every minute and are psychologically damaged by trolls’ deviant conduct. Trolls penetrate every part of internet activity: they leave unnecessary comments, provoke fights, or simply depress others, all of which has an adverse impact on internet users. Trolling occurs in various spheres and in divergent types of media sources. Trolls may leave false negative reviews that deceive people, or they may harass users and make their lives unbearable. With the advancement of technologies, there is an urgent need to strengthen people’s protection from trolls.

Solutions to the problem of trolling are possible at several levels: personal, corporate, and governmental. The most beneficial resolution would be to eliminate online anonymity at the governmental level. However, due to the many laws and regulations defending privacy and freedom of speech, it is quite complicated to achieve results in this sphere quickly. What can and should be done by every internet user is to be cautious about one’s behavior online. People should be careful not to provoke the trolls; however, trolls often do not even need to be provoked. On such occasions, the best solution is to ignore them at the level of personal communication. Online media corporations can contribute by implementing stricter rules in online chats and forums and by blocking the trolls. By taking small steps consistently, it is possible to develop a troll-free internet environment where every user can count on having a good time without being distracted and frustrated. Finally, apart from thinking of ways to change trolls, we should come up with ideas for changing our society so that there are fewer provocations and more pleasant things to discuss.

Reference List

Bishop, J 2013, Examining the concepts, issues, and implications of internet trolling, Information Science Reference, Hershey.

Bishop, J 2014, ‘Digital teens and the ‘antisocial network’: prevalence of troublesome online youth groups and internet trolling in Great Britain’, International Journal of E-Politics, vol. 5, no. 3, pp. 1-15.

Grande, T L 2010, ‘The eighteen types of internet trolls’, Smosh. Web.

Lumsden, K & Morgan, H M 2016, ‘‘Fraping’, ‘trolling’ and ‘rinsing’: social networking, feminist thought and the construction of young women as victims or villains’, Clinical and Experimental Optometry, vol. 99, no. 2, pp. 1-17.

Millet, W 2014, ‘The dangerous consequences of cyberbullying and trolling’, The Circular. Web.

Mkono, M 2015, ‘‘Troll alert!’: provocation and harassment in tourism and hospitality social media’, Current Issues in Tourism, vol. 1, pp. 1-14.

Phillips, W 2015, This is why we can’t have nice things: mapping the relationship between online trolling and mainstream culture, MIT Press, Cambridge, MA.

Rani, R 2016, ‘How abusive trolls are ruining an otherwise great tool – social media’, Youth Ki Avaaz. Web.

Sanfilippo, M A, Yang, S & Fichman, P 2017, ‘Managing online trolling: from deviant to social and political trolls’, Proceedings of the 50th Hawaii International Conference on System Sciences, pp. 1802-1811.

Stein, J 2016, ‘How trolls are ruining the internet’, Time. Web.


How Trolls Are Ruining the Internet

[Image: TIME magazine cover, “Troll Culture of Hate”]

This story is not a good idea. Not for society and certainly not for me. Because what trolls feed on is attention. And this little bit–these several thousand words–is like leaving bears a pan of baklava.

It would be smarter to be cautious, because the Internet’s personality has changed. Once it was a geek with lofty ideals about the free flow of information. Now, if you need help improving your upload speeds the web is eager to help with technical details, but if you tell it you’re struggling with depression it will try to goad you into killing yourself. Psychologists call this the online disinhibition effect, in which factors like anonymity, invisibility, a lack of authority and not communicating in real time strip away the mores society spent millennia building. And it’s seeping from our smartphones into every aspect of our lives.

The people who relish this online freedom are called trolls, a term that originally came from a fishing method online thieves use to find victims. It quickly morphed to refer to the monsters who hide in darkness and threaten people. Internet trolls have a manifesto of sorts, which states they are doing it for the “lulz,” or laughs. What trolls do for the lulz ranges from clever pranks to harassment to violent threats. There’s also doxxing–publishing personal data, such as Social Security numbers and bank accounts–and swatting, calling in an emergency to a victim’s house so the SWAT team busts in. When victims do not experience lulz, trolls tell them they have no sense of humor. Trolls are turning social media and comment boards into a giant locker room in a teen movie, with towel-snapping racial epithets and misogyny.


They’ve been steadily upping their game. In 2011, trolls descended on Facebook memorial pages of recently deceased users to mock their deaths. In 2012, after feminist Anita Sarkeesian started a Kickstarter campaign to fund a series of YouTube videos chronicling misogyny in video games, she received bomb threats at speaking engagements, doxxing threats, rape threats and an unwanted starring role in a video game called Beat Up Anita Sarkeesian. In June of this year, Jonathan Weisman, the deputy Washington editor of the New York Times, quit Twitter, on which he had nearly 35,000 followers, after a barrage of anti-Semitic messages. At the end of July, feminist writer Jessica Valenti said she was leaving social media after receiving a rape threat against her daughter, who is 5 years old.

A Pew Research Center survey published two years ago found that 70% of 18-to-24-year-olds who use the Internet had experienced harassment, and 26% of women that age said they’d been stalked online. This is exactly what trolls want. A 2014 study published in the psychology journal Personality and Individual Differences found that the approximately 5% of Internet users who self-identified as trolls scored extremely high in the dark tetrad of personality traits: narcissism, psychopathy, Machiavellianism and, especially, sadism.

But maybe that’s just people who call themselves trolls. And maybe they do only a small percentage of the actual trolling. “Trolls are portrayed as aberrational and antithetical to how normal people converse with each other. And that could not be further from the truth,” says Whitney Phillips, a literature professor at Mercer University and the author of This Is Why We Can’t Have Nice Things: Mapping the Relationship Between Online Trolling and Mainstream Culture. “These are mostly normal people who do things that seem fun at the time that have huge implications. You want to say this is the bad guys, but it’s a problem of us.”

A lot of people enjoy the kind of trolling that illuminates the gullibility of the powerful and their willingness to respond. One of the best is Congressman Steve Smith, a Tea Party Republican representing Georgia’s 15th District, which doesn’t exist. For nearly three years Smith has spewed over-the-top conservative blather on Twitter, luring Senator Claire McCaskill, Christiane Amanpour and Rosie O’Donnell into arguments. Surprisingly, the guy behind the GOP-mocking prank, Jeffrey Marty, isn’t a liberal but a Donald Trump supporter angry at the Republican elite, furious at Hillary Clinton and unhappy with Black Lives Matter. A 40-year-old dad and lawyer who lives outside Tampa, he says he has become addicted to the attention. “I was totally ruined when I started this. My ex-wife and I had just separated. She decided to start a new, more exciting life without me,” he says. Then his best friend, who he used to do pranks with as a kid, killed himself. Now he’s got an illness that’s keeping him home.

Marty says his trolling has been empowering. “Let’s say I wrote a letter to the New York Times saying I didn’t like your article about Trump. They throw it in the shredder. On Twitter I communicate directly with the writers. It’s a breakdown of all the institutions,” he says. “I really do think this stuff matters in the election. I have 1.5 million views of my tweets every 28 days. It’s a much bigger audience than I would have gotten if I called people up and said, ‘Did you ever consider Trump for President?'”

Trolling is, overtly, a political fight. Liberals do indeed troll–sex-advice columnist Dan Savage used his followers to make Googling former Pennsylvania Senator Rick Santorum’s last name a blunt lesson in the hygienic challenges of anal sex; the hunter who killed Cecil the lion got it really bad.

But trolling has become the main tool of the alt-right, an Internet-grown reactionary movement that works for men’s rights and against immigration and may have used the computer from Weird Science to fabricate Donald Trump. Not only does Trump share their attitudes, but he’s got mad trolling skills: he doxxed Republican primary opponent Senator Lindsey Graham by giving out his cell-phone number on TV and indirectly got his Twitter followers to attack GOP political strategist Cheri Jacobus so severely that her lawyers sent him a cease-and-desist order.

The alt-right’s favorite insult is to call men who don’t hate feminism “cucks,” as in “cuckold.” Republicans who don’t like Trump are “cuckservatives.” Men who don’t see how feminists are secretly controlling them haven’t “taken the red pill,” a reference to the truth-revealing drug in The Matrix. They derisively call their adversaries “social-justice warriors” and believe that liberal interest groups purposely exploit their weakness to gain pity, which allows them to control the levers of power. Trolling is the alt-right’s version of political activism, and its ranks view any attempt to take it away as a denial of democracy.

In this new culture war, the battle isn’t just over homosexuality, abortion, rap lyrics, drugs or how to greet people at Christmastime. It’s expanded to anything and everything: video games, clothing ads, even remaking a mediocre comedy from the 1980s. In July, trolls who had long been furious that the 2016 reboot of Ghostbusters starred four women instead of men harassed the film’s black co-star Leslie Jones so badly on Twitter with racist and sexist threats–including a widely copied photo of her at the film’s premiere that someone splattered semen on–that she considered quitting the service. “I was in my apartment by myself, and I felt trapped,” Jones says. “When you’re reading all these gay and racial slurs, it was like, I can’t fight y’all. I didn’t know what to do. Do you call the police? Then they got my email, and they started sending me threats that they were going to cut off my head and stuff they do to ‘N words.’ It’s not done to express an opinion, it’s done to scare you.”

Because of Jones’ harassment, alt-right leader Milo Yiannopoulos was permanently banned from Twitter. (He is also an editor at Breitbart News, the conservative website whose executive chairman, Stephen Bannon, was hired Aug. 17 to run the Trump campaign.) The service said Yiannopoulos, a critic of the new Ghostbusters who called Jones a “black dude” in a tweet, marshaled many of his more than 300,000 followers to harass her. He not only denies this but says being responsible for your fans is a ridiculous standard. He also thinks Jones is faking hurt for political purposes. “She is one of the stars of a Hollywood blockbuster,” he says. “It takes a certain personality to get there. It’s a politically aware, highly intelligent star using this to get ahead. I think it’s very sad that feminism has turned very successful women into professional victims.”

A gay, 31-year-old Brit with frosted hair, Yiannopoulos has been speaking at college campuses on his Dangerous Faggot tour. He says trolling is a direct response to being told by the left what not to say and what kinds of video games not to play. “Human nature has a need for mischief. We want to thumb our nose at authority and be individuals,” he says. “Trump might not win this election. I might not turn into the media figure I want to. But the space we’re making for others to be bolder in their speech is some of the most important work being done today. The trolls are the only people telling the truth.”

The alt-right was galvanized by Gamergate, a 2014 controversy in which trolls tried to drive critics of misogyny in video games away from their virtual man cave. “In the mid-2000s, Internet culture felt very separate from pop culture,” says Katie Notopoulos, who reports on the web as an editor at BuzzFeed and co-host of the Internet Explorer podcast. “This small group of people are trying to stand their ground that the Internet is dark and scary, and they’re trying to scare people off. There’s such a culture of viciously making fun of each other on their message boards that they have this very thick skin. They’re all trained up.”

Andrew Auernheimer, who calls himself Weev online, is probably the biggest troll in history. He served just over a year in prison for identity fraud and conspiracy. When he was released in 2014, he left the U.S., mostly bouncing around Eastern Europe and the Middle East. Since then he has worked to post anti–Planned Parenthood videos and flooded thousands of university printers in America with instructions to print swastikas–a symbol tattooed on his chest. When I asked if I could fly out and interview him, he agreed, though he warned that he “might not be coming ashore for a while, but we can probably pass close enough to land to have you meet us somewhere in the Adriatic or Ionian.” His email signature: “Eternally your servant in the escalation of entropy and eschaton.”

While we planned my trip to “a pretty remote location,” he told me that he no longer does interviews for free and that his rate was two bitcoins (about $1,100) per hour. That’s when one of us started trolling the other, though I’m not sure which:

From: Joel Stein

To: Andrew Auernheimer

I totally understand your position. But TIME, and all the major media outlets, won’t pay people who we interview. There’s a bunch of reasons for that, but I’m sure you know them.

Thanks anyway,

From: Andrew Auernheimer

To: Joel Stein

I find it hilarious that after your people have stolen years of my life at gunpoint and bulldozed my home, you still expect me to work for free in your interests.

You people belong in a f-cking oven.

For a guy who doesn’t want to be interviewed for free, you’re giving me a lot of good quotes!

In a later blog post about our emails, Weev clarified that TIME is “trying to destroy white civilization” and that we should “open up your Jew wallets and dump out some of the f-cking geld you’ve stolen from us goys, because what other incentive could I possibly have to work with your poisonous publication?” I found it comforting that the rate for a neo-Nazi to compromise his ideology is just two bitcoins.

Expressing socially unacceptable views like Weev’s is becoming more socially acceptable. Sure, just like there are tiny, weird bookstores where you can buy neo-Nazi pamphlets, there are also tiny, weird white-supremacist sites on the web. But some of the contributors on those sites now go to places like 8chan or 4chan, which have a more diverse crowd of meme creators, gamers, anime lovers and porn enthusiasts. Once accepted there, they move on to Reddit, the ninth most visited site in the U.S., on which users can post links to online articles and comment on them anonymously. Reddit believes in unalloyed free speech; the site only eliminated the comment boards “jailbait,” “creepshots” and “beatingwomen” for legal reasons.

But last summer, Reddit banned five more discussion groups for being distasteful. The one with the largest user base, more than 150,000 subscribers, was “fatpeoplehate.” It was a particularly active community that reveled in finding photos of overweight people looking happy, almost all women, and adding mean captions. Reddit users would then post these images all over the targets’ Facebook pages along with anywhere else on the Internet they could. “What you see on Reddit that is visible is at least 10 times worse behind the scenes,” says Dan McComas, a former Reddit employee. “Imagine two users posting about incest and taking that conversation to their private messages, and that’s where the really terrible things happen. That’s where we saw child porn and abuse and had to do all of our work with law enforcement.”

Jessica Moreno, McComas’ wife, pushed for getting rid of “fatpeoplehate” when she was the company’s head of community. This was not a popular decision with users who really dislike people with a high body mass index. She and her husband had their home address posted online along with suggestions on how to attack them. Eventually they had a police watch on their house. They’ve since moved. Moreno has blurred their house on Google Maps and expunged nearly all photos of herself online.

During her time at Reddit, some users who were part of a group that mails secret Santa gifts to one another complained to Moreno that they didn’t want to participate because the person assigned to them made racist or sexist comments on the site. Since these people posted their real names, addresses, ages, jobs and other details for the gifting program, Moreno learned a good deal about them. “The idea of the basement dweller drinking Mountain Dew and eating Doritos isn’t accurate,” she says. “They would be a doctor, a lawyer, an inspirational speaker, a kindergarten teacher. They’d send lovely gifts and be a normal person.” These are real people you might know, Moreno says. There’s no real-life indicator. “It’s more complex than just being good or bad. It’s not all men either; women do take part in it.” The couple quit their jobs and started Imzy, a cruelty-free Reddit. They believe that saving a community is nearly impossible once mores have been established, and that sites like Reddit are permanently lost to the trolls.

When sites are overrun by trolls, they drown out the voices of women, ethnic and religious minorities, gays–anyone who might feel vulnerable. Young people in these groups assume trolling is a normal part of life online and therefore self-censor. An anonymous poll of the writers at TIME found that 80% had avoided discussing a particular topic because they feared the online response. The same percentage consider online harassment a regular part of their jobs. Nearly half the women on staff have considered quitting journalism because of hatred they’ve faced online, although none of the men had. Their comments included “I’ve been raged at with religious slurs, had people track down my parents and call them at home, had my body parts inquired about.” Another wrote, “I’ve had the usual online trolls call me horrible names and say I am biased and stupid and deserve to be raped. I don’t think men realize how normal that is for women on the Internet.”

The alt-right argues that if you can’t handle opprobrium, you should just turn off your computer. But that’s arguing against self-expression, something antithetical to the original values of the Internet. “The question is: How do you stop people from being a–holes not to their face?” says Sam Altman, a venture capitalist who invested early in Reddit and ran the company for eight days in 2014 after one of its many PR crises. “This is exactly what happened when people talked badly about public figures. Now everyone on the Internet is a public figure. The problem is that not everyone can deal with that.” Altman declared on June 15 that he would quit Twitter and his 171,000 followers, saying, “I feel worse after using Twitter … my brain gets polluted here.”

Twitter’s head of trust and safety, Del Harvey, struggles with how to allow criticism but curb abuse. “Categorically to say that all content you don’t like receiving is harassment would be such a broad brush it wouldn’t leave us much content,” she says. Harvey is not her real name, which she gave up long ago when she became a professional troll, posing as underage girls (and occasionally boys) to entrap pedophiles as an administrator for the website Perverted-Justice and later for NBC’s To Catch a Predator. Citing the role of Twitter during the Arab Spring, she says that anonymity has given voice to the oppressed, but that women and minorities are more vulnerable to attacks by the anonymous.

But even those in the alt-right who claim they are “unf-ckwithable” aren’t really. At some point, everyone, no matter how desensitized by their online experience, is liable to get freaked out by a big enough or cruel enough threat. Still, people have vastly different levels of sensitivity. A white male journalist who covers the Middle East might blow off death threats, but a teenage blogger might not be prepared to be told to kill herself because of her “disgusting acne.”

Which are exactly the kinds of messages Em Ford, 27, was receiving en masse last year on her YouTube tutorials on how to cover pimples with makeup. Men claimed to be furious about her physical “trickery,” forcing her to block hundreds of users each week. This year, Ford made a documentary for the BBC called Troll Hunters in which she interviewed online abusers and victims, including a soccer referee who had rape threats posted next to photos of his young daughter on her way home from school. What Ford learned was that the trolls didn’t really hate their victims. “It’s not about the target. If they get blocked, they say, ‘That’s cool,’ and move on to the next person,” she says. Trolls don’t hate people as much as they love the game of hating people.

Troll culture might be affecting the way nontrolls treat one another. A yet-to-be-published study by University of California, Irvine, professor Zeev Kain and Amy Jo Martin showed that when people were exposed to reports of good deeds on Facebook, they were 10% more likely to report doing good deeds that day. But the opposite is likely occurring as well. “One can see discourse norms shifting online, and they’re probably linked to behavior norms,” says Susan Benesch, founder of the Dangerous Speech Project and faculty associate at Harvard’s Berkman Klein Center for Internet & Society. “When people think it’s increasingly O.K. to describe a group of people as subhuman or vermin, those same people are likely to think that it’s O.K. to hurt those people.”

As more trolling occurs, many victims are finding laws insufficient and local police untrained. “Where we run into the problem is the social-media platforms are very hesitant to step on someone’s First Amendment rights,” says Mike Bires, a senior police officer in Southern California who co-founded LawEnforcement.social, a tool for cops to fight online crime and use social media to work with their communities. “If they feel like someone’s life is in danger, Twitter and Snapchat are very receptive. But when it comes to someone harassing you online, getting the social-media companies to act can be very frustrating.” Until police are fully caught up, he recommends that victims go to the officer who runs the force’s social-media department.

One counter-trolling strategy now being employed on social media is to flood the victims of abuse with kindness. That’s how many Twitter users have tried to blunt racist and body-shaming attacks on U.S. women’s gymnastics star Gabby Douglas and Mexican gymnast Alexa Moreno during the Summer Olympics in Rio. In 2005, after Emily May co-founded Hollaback!, which posts photos of men who harass women on the street in order to shame them (some might call this trolling), she got a torrent of misogynistic messages. “At first, I thought it was funny. We were making enough impact that these losers were spending their time calling us ‘cunts’ and ‘whores’ and ‘carpet munchers,'” she says. “Long-term exposure to it, though, I found myself not being so active on Twitter and being cautious about what I was saying online. It’s still harassment in public space. It’s just the Internet instead of the street.” This summer May created Heartmob, an app to let people report trolling and receive messages of support from others.

Though everyone knows not to feed the trolls, that can be challenging for the type of people used to expressing their opinions. Writer Lindy West has written about her abortion, her hatred of rape jokes and her body image–all of which generated a flood of angry messages. When her father Paul died, a troll quickly started a fake Twitter account called PawWestDonezo (“donezo” is slang for “done”), with a photo of her dad and the bio “embarrassed father of an idiot.” West reacted by writing about it. Then she heard from her troll, who apologized, explaining that he wasn’t happy with his life and was angry at her for being so pleased with hers.

West says that even though she’s been toughened by all the abuse, she is thinking of writing for TV, where she’s more insulated from online feedback. “I feel genuine fear a lot. Someone threw a rock through my car window the other day, and my immediate thought was it’s someone from the Internet,” she says. “Finally we have a platform that’s democratizing and we can make ourselves heard, and then you’re harassed for advocating for yourself, and that shuts you down again.”

I’ve been a columnist long enough that I got calloused to abuse via threats sent over the U.S. mail. I’m a straight white male, so the trolling is pretty tame, my vulnerabilities less obvious. My only repeat troll is Megan Koester, who has been attacking me on Twitter for a little over two years. Mostly, she just tells me how bad my writing is, always calling me “disgraced former journalist Joel Stein.” Last year, while I was at a restaurant opening, she tweeted that she was there too and that she wanted to take “my one-sided feud with him to the next level.” She followed this immediately with a tweet that said, “Meet me outside Clifton’s in 15 minutes. I wanna kick your ass.” Which shook me a tiny bit. A month later, she tweeted that I should meet her outside a supermarket I often go to: “I’m gonna buy some Ahi poke with EBT and then kick your ass.”

I sent a tweet to Koester asking if I could buy her lunch, figuring she’d say no or, far worse, say yes and bring a switchblade or brass knuckles, since I have no knowledge of feuding outside of West Side Story. Her email back agreeing to meet me was warm and funny. Though she also sent me the script of a short movie she had written (see excerpt, left).

I saw Koester standing outside the restaurant. She was tiny–5 ft. 2 in., with dark hair, wearing black jeans and a Spy magazine T-shirt. She ordered a seitan sandwich, and after I asked the waiter about his life, she looked at me in horror. “Are you a people person?” she asked. As a 32-year-old freelance writer for Vice.com who has never had a full-time job, she lives on a combination of sporadic paychecks and food stamps. My career success seemed, quite correctly, unjust. And I was constantly bragging about it in my column and on Twitter. “You just extruded smarminess that I found off-putting. It’s clear I’m just projecting. The things I hate about you are the things I hate about myself,” she said.

As a feminist stand-up comic with more than 26,000 Twitter followers, Koester has been trolled more than I have. One guy was so furious that she made fun of a 1970s celebrity at an autograph session that he tweeted he was going to rape her and wanted her to die afterward. “So you’d think I’d have some sympathy,” she said about trolling me. “But I never felt bad. I found that column so vile that I thought you didn’t deserve sympathy.”

When I suggested we order wine, she told me she’s a recently recovered alcoholic who was drunk at the restaurant opening when she threatened to beat me up. I asked why she didn’t actually walk up to me that afternoon and, even if she didn’t punch me, at least tell me off. She looked at me like I was an idiot. “Why would I do that?” she said. “The Internet is the realm of the coward. These are people who are all sound and no fury.”

Maybe. But maybe, in the information age, sound is as destructive as fury.

Editor’s Note: An earlier version of this story included a reference to Asperger’s Syndrome in an inappropriate context. It has been removed. Additionally, an incorrect description of Megan Koester’s sexual orientation has been removed. The original version also omitted an author of a study about Facebook and good deeds.


Stanford University


Internet trolls, by definition, are disruptive, combative and often unpleasant with their offensive or provocative online posts designed to disturb and upset.

Under the right circumstances, just about anybody can become an Internet troll, according to Stanford research. (Image credit: wildpixel / Getty Images)

The common assumption is that people who troll are different from the rest of us, allowing us to dismiss them and their behavior. But research from Stanford University and Cornell University, published as part of the upcoming 2017 Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2017), suggests otherwise. The research offers evidence that, under the right circumstances, anyone can become a troll.

“We wanted to understand why trolling is so prevalent today,” said Justin Cheng, a computer science researcher at Stanford and lead author of the paper. “While the common knowledge is that trolls are particularly sociopathic individuals that occasionally appear in conversations, is it really just these people who are trolling others?”

Taking inspiration from social psychology research methods, Cheng investigated whether trolling behavior is an innate characteristic or whether situational factors can influence people to act like trolls. Through a combination of experimentation, data analysis and machine learning, the researchers homed in on simple factors that make the average person more likely to troll.

Becoming a troll

Following previous research on antisocial behavior, the researchers decided to focus on how mood and context affect what people write on a discussion forum. They set up a two-part experiment with 667 subjects recruited through a crowdsourcing platform.

In the first part of the experiment, the participants were given a test, which was either very easy or very difficult. After taking the tests, all subjects filled out a questionnaire that evaluated various facets of their mood, including anger, fatigue, depression and tension. As expected, the people who completed the difficult test were in a worse mood than those who had the easy test.

All participants were then instructed to read an article and engage in its comment section. They had to leave at least one comment but could leave multiple comments, up-votes, and down-votes, and could reply to other comments. All participants saw the same article on the same platform, created solely for the experiment, but some participants were given a forum with three troll posts at the top of the comment section. Others saw three neutral posts.

Two independent experts evaluated whether the posts left by subjects qualified as trolling, defined generally in this research by a combination of posting guidelines taken from several discussion forums. For example, personal attacks and cursing were indicative of troll posts.

About 35 percent of people who completed the easy test and saw neutral posts then posted troll comments of their own. That percentage jumped to 50 percent if the subject either took the hard test or saw trolling comments. People exposed to both the difficult test and the troll posts trolled approximately 68 percent of the time.

The spread of trolling

To relate these experimental insights to the real world, the researchers also analyzed anonymized data from CNN’s comment section from throughout 2012. This data consisted of 1,158,947 users, 200,576 discussions and 26,552,104 posts. This included banned users and posts that were deleted by moderators. In this part of the research, the team defined troll posts as those that were flagged by members of the community for abuse.

It wasn’t possible to directly evaluate the mood of the commenters, but the team looked at the time stamp of posts because previous research has shown that time of day and day of week correspond with mood. Incidents of down-votes and flagged posts lined up closely with established patterns of negative mood. Such incidents tend to increase late at night and early in the week, which is also when people are most likely to be in a bad mood.
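The analysis described here amounts to simple aggregation: compute the rate of flagged posts per hour of day and per day of week, then compare the peaks against known mood patterns. Below is a minimal sketch of that idea in Python; the file name and column names ("comments.csv", "created_at", "flagged") are hypothetical stand-ins, not the study's actual data or code.

```python
# Sketch: flagged-post rate by hour and weekday as a rough mood proxy.
# "comments.csv", "created_at" and "flagged" are hypothetical names.
import pandas as pd

posts = pd.read_csv("comments.csv", parse_dates=["created_at"])
posts["hour"] = posts["created_at"].dt.hour        # 0-23
posts["weekday"] = posts["created_at"].dt.day_name()

# Mean of a 0/1 "flagged" column = share of flagged posts in each bucket.
flag_rate_by_hour = posts.groupby("hour")["flagged"].mean()
flag_rate_by_weekday = posts.groupby("weekday")["flagged"].mean()

# Under the mood hypothesis, the highest rates should appear late at
# night and early in the week.
print(flag_rate_by_hour.sort_values(ascending=False).head())
print(flag_rate_by_weekday.sort_values(ascending=False))
```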

The researchers investigated the effects of mood further and found that people were more likely to produce a flagged post if they had recently been flagged or if they had taken part in a separate discussion that merely included flagged posts written by others. These findings held true no matter what article was associated with the discussion.

“It’s a spiral of negativity,” explained Jure Leskovec, associate professor of computer science at Stanford and senior author of the study. “Just one person waking up cranky can create a spark and, because of discussion context and voting, these sparks can spiral out into cascades of bad behavior. Bad conversations lead to bad conversations. People who get down-voted come back more, comment more and comment even worse.”

Predicting bad behavior

As a final step in their research, the team created a machine-learning algorithm tasked with predicting whether the next post an author wrote would be flagged.

The information fed to the algorithm included the time stamp of the author’s last post, whether the last post was flagged, whether the previous post in the discussion was flagged, the author’s overall history of writing flagged posts and the anonymized user ID of the author.

The findings showed that the flag status of the previous post in the discussion was the strongest predictor of whether the next post would be flagged. Mood-related features, such as timing and previous flagging of the commenter, were far less predictive. The user’s history and user ID, although somewhat predictive, were still significantly less informative than discussion context. This implies that, while some people may be consistently more prone to trolling, the context in which we post is more likely to lead to trolling.
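For illustration, the prediction task can be set up along the following lines. This is only a sketch under assumed data, not the paper's published model: the file name, feature columns, and the choice of logistic regression are all stand-ins, and the anonymized user ID is omitted here because a high-cardinality categorical feature would clutter the example.

```python
# Sketch: predict whether an author's next post will be flagged.
# All names are hypothetical; the model choice is illustrative only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("posts.csv")  # hypothetical dataset

features = [
    "hour_of_last_post",                # time stamp of the author's last post
    "last_post_flagged",                # was the author's last post flagged?
    "prev_post_in_discussion_flagged",  # discussion context
    "author_flag_rate",                 # author's history of flagged posts
]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["next_post_flagged"], random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Per the study's finding, the discussion-context coefficient should
# dominate the mood-related and history-related ones.
print(dict(zip(features, model.coef_[0])))
```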

Troll prevention

Between the real-life, large-scale data analysis, the experiment and the predictive task, the findings were strong and consistent. The researchers suggest that conversation context and mood can lead to trolling. They believe this could inform the creation of better online discussion spaces.

“Understanding what actually determines somebody to behave antisocially is essential if we want to improve the quality of online discussions,” said Cristian Danescu-Niculescu-Mizil, assistant professor of information science at Cornell University and co-author of the paper. “Insight into the underlying causal mechanisms could inform the design of systems that encourage a more civil online discussion and could help moderators mitigate trolling more effectively.”

Interventions to prevent trolling could include discussion forums that recommend a cooling-off period to commenters who have just had a post flagged, systems that automatically alert moderators to a post that’s likely to be a troll post or “shadow banning,” which is the practice of hiding troll posts from non-troll users without notifying the troll.
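Of these interventions, shadow banning is the simplest to illustrate in code. The sketch below uses entirely hypothetical names: posts by a shadow-banned author stay visible to that author but are filtered out of everyone else's view, so the troll gets no signal to escalate against.

```python
# Sketch of shadow banning: hide a banned author's posts from everyone
# except the author. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

shadow_banned = {"troll42"}  # set maintained by moderators

def visible_posts(thread: list[Post], viewer: str) -> list[Post]:
    """Return the thread as a particular viewer would see it."""
    return [p for p in thread
            if p.author not in shadow_banned or p.author == viewer]

thread = [Post("alice", "Nice article."),
          Post("troll42", "You are all idiots.")]
print([p.text for p in visible_posts(thread, "bob")])      # troll's post hidden
print([p.text for p in visible_posts(thread, "troll42")])  # troll still sees it
```

The design point of this intervention is that the banned user keeps posting into the void instead of escalating after an explicit ban.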

The researchers believe studies like this are only the beginning of work that’s been needed for some time, since the Internet is far from being the worldwide village of cordial debate and discussion people once thought it would become.

“At the end of the day, what this research is really suggesting is that it’s us who are causing these breakdowns in discussion,” said Michael Bernstein, assistant professor of computer science at Stanford and co-author of the paper. “A lot of news sites have removed their comments systems because they think it’s counter to actual debate and discussion. Understanding our own best and worst selves here is key to bringing those back.”

This work was supported in part by Microsoft, Google, the National Science Foundation, the Army Research Office, the U.S. Department of Defense, the Stanford Data Science Initiative, Boeing, Lightspeed, SAP and Volkswagen.



Internet Trolls and the Ones Who Love Them (October 28, 2016)

The formula for mass internet outrage is increasingly nebulous; we never know what will set off the next online frenzy. But Milo Yiannopoulos, senior editor at Breitbart, seems to have it all figured out. As a particularly prominent voice of the alt-right movement, he has carved out a niche market for himself by exploiting the volatile, at times fickle cycles of online outrage. He has developed an audience by routinely saying outrageous things in protest of a culture he considers too mired in political correctness. Not surprisingly, he is a proud supporter of Republican presidential candidate Donald Trump.

Yiannopoulos claims to detest what he perceives to be oversensitivity, while thriving on the recreational outrage culture he so often provokes. This provocation is precisely how he gets attention, and it’s simply not plausible that he actually believes some of the outrageous things he says. Yiannopoulos is a troll. That's not meant to be an insult; it's just the best term to describe what he does for a living, because in no universe could his actions be considered journalism. That is not to say Yiannopoulos’ asinine commentary does not have broader implications. The fact that he has amassed something of a cult following is evidence enough that his rhetoric is attractive to a specific segment of conservatives.

After the release of the new “Ghostbusters” film, Yiannopoulos wrote a review titled “Teenage Boys with Tits: Here’s my problem with ‘Ghostbusters.’” Following the publication of Yiannopoulos’ dismal review, star Leslie Jones was bombarded with a large number of hateful tweets, many of which were racially charged. Yiannopoulos himself later joined in on the attacks against the actress. In one tweet, he described Jones as “barely literate.” Jones spent the day retweeting the most vitriolic messages, and later announced that she was leaving the platform due to her negative experience. Twitter CEO Jack Dorsey invited Jones to direct message him about the situation. Subsequently, Yiannopoulos, along with many others involved in the harassment, was permanently banned from the platform. Twitter released a statement: “…no one deserves to be subjected to targeted abuse online, and our rules prohibit inciting or engaging in the targeted abuse or harassment of others … We know many people believe we have not done enough to curb this type of behavior on Twitter. We agree.” Vox published an extremely thorough play-by-play of the whole situation.

Yiannopoulos accused Twitter of banning him for political reasons. He claimed that Twitter allows jihadists to use the social media platform but silences conservatives. In one comment published via Breitbart, Yiannopoulos said, “With the cowardly suspension of my account, Twitter has confirmed itself as a safe space for Muslim terrorists and Black Lives Matter extremists, but a no-go zone for conservatives.” This remark was part of his broader strategy to paint liberals as tone deaf and irresponsible on the matter of global terrorism. It is a rhetorically effective strategy, but there are a number of specious assumptions baked into such claims. Yiannopoulos’ argument is a classic case of apples and oranges. Is it true that Twitter “allows” jihadists on its platform? One wonders if Islamic terrorists and other religious extremists should be presented with a box to check confirming their intentions before signing up for a service like Twitter. Would they check such a box? Probably not. So it’s a distinct and challenging problem for Twitter to identify those ill-intentioned users. If anything, Yiannopoulos and his army of trolls make this problem decidedly more burdensome, because Twitter must not only devote time and resources to removing jihadists but also deal with cases of harassment.

In February, Twitter released a statement that it had shut down 125,000 accounts related to the terrorist group ISIS. Twitter also stated that the company had significantly bulked up the team responsible for fighting such activity. According to the Obama administration, traffic related to terrorist accounts on Twitter has decreased 45 percent over the past two years. Still, new accounts are created frequently. It’s clear that this is an ongoing problem that needs constant attention. But why should that have any bearing whatsoever on the situation with Yiannopoulos? They are separate issues. In practical terms, this type of response is a non sequitur. It’s akin to a shoplifter who gets caught claiming to be treated unfairly on the basis that the cops are casually allowing murderers to go free, as if the shoplifter is the best source of information on that matter. It may be true that murders are occurring, but that does not mean the police are “allowing” it to happen. Cops are not omnipresent. They cannot stop all murders. Does that fact mean arresting the shoplifter is unfair treatment? Does it mean shoplifters should be given free rein to steal whatever they please while police officers devote all their resources to finding murderers? No reasonable person would make that claim. The same logic applies to Yiannopoulos’ complaint about Twitter. It might be true that Twitter should do more to stop jihadists from using its platform, but that argument is irrelevant to the matter at hand. The true horror of jihadism doesn’t make online harassment any less repugnant and disgraceful.

Let's take seriously the claim that Twitter banning Yiannopoulos violates his freedom of speech. As advocates of free speech often note, the most extreme cases are what truly test our dedication to the First Amendment. For instance, even though most people hate doing so, we've begrudgingly tolerated the Westboro Baptist Church holding demonstrations near the funerals of fallen soldiers. Yes, there is near uniform agreement that the Westboro Baptist Church’s actions are consistently and utterly despicable, but so long as the protesters maintain a reasonable distance from the actual funeral so as not to interfere with the event, they are exercising their right to peaceably assemble, and the government cannot prohibit them from doing so. That would be a valid point in Milo's case, if he were a U.S. citizen being prosecuted by the U.S. government. But he's not. He was banned from a widely used social media platform for purportedly violating the site’s terms of service. There's a major difference between the government’s response to the Westboro Baptist Church and Twitter’s response to Yiannopoulos. However, if you took Yiannopoulos at his word, you'd be compelled to believe this whole situation is an untenable, outrageous violation of human rights on par with, say, the imprisonment of Ai Weiwei in China for his vocal criticism of the government. But, of course, those two situations are not even in the same ballpark. They’re not even on the same planet.

Is there a time and place for trolls in this ever-changing digital landscape? Probably not, but that's a topic for another time. The point here is that this is not a free speech issue. The United States has some of the most enduring, robust protections of free speech of any industrialized nation in the world. Even under such protections – even given the most charitable version of events in Yiannopoulos' favor – Twitter banning Yiannopoulos could not be construed as an infringement of free speech. Individuals do not have the right to exposure on a corporation’s platform; Twitter is not legally bound to serve as a host for Yiannopoulos’ hateful rhetoric. A world in which that were the case would be an absurd world indeed. The fact that Yiannopoulos clearly feels entitled to such exposure is just further proof that his position is untenable. It was untenable before the advent of the internet, and it's untenable now. His banning is akin to a troll being banned from any typical online forum, and the fact that Twitter is a large platform does not change the underlying principles at play.

There are a number of well-founded reasons why an organization like the American Civil Liberties Union (ACLU), one that typically champions free speech, is not decrying the fact that Yiannopoulos was issued a lifetime ban from Twitter. Yiannopoulos would be quick to say it's because the ACLU has a distinct liberal bias, and for the sake of brevity, let's assume that's true. Still, it is certainly safe to assume that this event's historical significance is utterly trivial. Milo Yiannopoulos getting banned from Twitter falls somewhere between Phil Robertson being suspended from the TV show “Duck Dynasty” for making homophobic comments and certain businesses severing relations with Paula Deen after her unfortunate use of a certain racial epithet. There are an astounding number of instances of actual infringements on free speech in the world. To mention these instances in the same breath as the situation concerning Yiannopoulos would not only be specious, it would be laughable. Despite any perceived biases, it's pertinent to make it clear to anyone reading this that the ACLU does more for the cause of free speech in a day than Yiannopoulos has done throughout his entire career as a “provocateur.”

Twitter’s ban on Yiannopoulos is simply not important. It’s not even on the ACLU's radar, and even if it were, the ACLU wouldn’t care. What is important is that people understand the meaning of free speech as a set of nuanced ethical principles. A TV personality making asinine remarks and being publicly chided for it is not the same thing as an infringement of free speech. An online personality engaging in systematic harassment of some unsuspecting individual on a social media platform and consequently being banned from said platform does not constitute a violation of free speech.

If we are to think about the right to free speech as an assurance that one can freely engage in a marketplace of ideas, it makes sense to distinguish between access to the marketplace itself and access to particular platforms within that marketplace. It's easy to conflate those two concepts. Conservative rabble-rousers seem to do it quite often. Let's imagine that you're in business as, say, a tire manufacturer in the United States. Now, it is certainly your right to start a business, should you have the means and wherewithal to do so. There are laws protecting your right to go through the startup process, and there are laws preventing others from engaging in coercive and violent acts to harm your business. Hypothetically, if someone were to set fire to your factory, or even attempt to spread libelous rumors about your business, you could take them to court. Granted, libel cases are difficult, but if you could prove that someone was intentionally spreading harmful lies about your business, and that your business suffered as a result, you'd likely have a good case. However, if there's a rubber supplier that refuses to work with you for ambiguous reasons, there's little legal recourse. Moreover, if a landowner refuses to sell you a space for a warehouse, assuming that individual is not discriminating against you based on your affiliation with a protected class (and it's important to note here that political affiliation is generally not considered a protected class), again, you have no legal recourse. You cannot force someone to do business with you, nor should you be able to.

Let's try to think about the ban from Twitter's perspective. If the company indeed banned Yiannopoulos for political reasons, said banning would be somewhat self-defeating; it would play right into the narrative that Yiannopoulos espouses, which is that he's a downtrodden hero for the free-speech crowd. He's playing that card already. The fact remains, though, that Yiannopoulos has not been silenced. In actuality, he's been emboldened by this whole episode. Yiannopoulos actually publicly thanked Twitter for banning him, because he believes the whole situation has generated buzz about him. As questionable as some of Twitter’s actions have been in the past, we can at least assume the company was smart enough to know this publicity was inevitable. Yet, Twitter still decided to ban him. It logically follows that the social media giant thought it was a worthwhile decision, regardless of whether Yiannopoulos derived some glory from it. Why? Twitter is beholden to its shareholders and its user base — a young, predominantly liberal user base. This is not a crime, and it’s not unethical. It’s just a fact.

Let’s dispel another specious proposition: Twitter should not be heralded as a champion against online bullies. This ban likely wasn't an action Twitter took out of empathy for Leslie Jones, although it is entirely possible Dorsey did empathize with her. After all, Jones was systematically and relentlessly harassed, essentially for doing her job. What she experienced was absurd. But whether or not the executives at Twitter felt empathy for Jones is likely to be incidental, at least as far as it influenced the company’s decision to take action. Corporations rarely deal in empathy as a currency, except when it affects their bottom line. Ironically, this is a point that traditional business conservatives tend to see as intuitive. It increasingly seems that the ban was in part a public relations move, but one deeply rooted in pragmatism. Twitter banned Yiannopoulos due to pressure from users, due to a need to combat the increased perception that it does not adequately handle harassment, and most importantly, due to the fact that Yiannopoulos clearly did violate the platform’s terms of service. In what universe can Twitter’s choice be construed as unethical? There is a distinction between a company taking an action for purely ideological reasons and a company taking actions for reasons primarily related to ensuring smooth operations and its continued survival. Yiannopoulos unwittingly gave Twitter an ideal pretext to make an example out of him, and the company made an executive decision to do so.

If that sounds cynical, consider a hypothetical scenario in which Twitter's board, or whoever has influence at the company, does have an established liberal agenda, and that there's a direct prerogative at the company to silence any voices of dissent, i.e. “conservatives.” If that were really true, why is it generally only the fringe, relatively extreme cases that Twitter acts upon? Bill O’Reilly doesn’t have any trouble from the administrators on Twitter, nor does Sean Hannity, Ann Coulter or Glenn Beck. Couldn't it just be that most professional pundits, left or right, have the good sense not to engage in harassment and needlessly inflammatory behavior on Twitter? Could it be that lesser known fringe commentators and other trolls are simply more prone to violate Twitter's terms of service? Some statistics might put this into perspective. Generally, the more education a person has, the more likely they are to hold predominantly liberal positions on a wide number of issues. This correlation has a somewhat impolite inverse: the largely homogeneous intersection of voters that identifies with Trump, and in turn the alt-right movement as a whole, tends to be less educated. It’s an uncomfortable truth with which we must reckon.

That folks of the alt-right persuasion are typically less educated is not a fact to be celebrated by “enlightened” liberals or arrogantly held over the heads of Trump supporters. It's data that ought to be bemoaned by anyone who values civil discourse, specifically in the online realm where anonymity continues to reign supreme. People who find themselves roped in by alt-right rhetoric are being exploited. They have not been trained to recognize fallacious reasoning; Yiannopoulos is just one of the unscrupulous talking heads speaking to them on their level. So, it shouldn't be a surprise when his tactics strategically appeal to such a demographic. He's doing what a businessman does: exploiting a niche market. Unfortunately, as long as there is an ambient level of ignorance in the world, there is a strong market for trolls such as Yiannopoulos. The real world operates by market forces. Yiannopoulos is beholden to his demographic, and Twitter is beholden to its own. The situation is really that simple. This is merely a case in which the interests of two demographics were at odds with one another. It’s essentially free speech in action, on a macro level.

David Stockdale is a freelance writer from the Chicagoland area. His political columns and book reviews have been featured in AND Magazine. His fictional work has appeared in Electric Rather, The Commonline Journal, Midwest Literary Magazine and Go Read Your Lunch. Two of his essays are featured in A Practical Guide to Digital Journalism Ethics. David can be reached at [email protected] or via his website.



Troll story: The dark tetrad and online trolling revisited with a glance at humor

Authors:

  • Sara Alida Volkmer. Roles: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Visualization, Writing – original draft. Affiliation: School of Management, Professorship for Digital Marketing, Technical University of Munich, Heilbronn, Germany. Corresponding author; E-mail: [email protected]
  • Susanne Gaube. Roles: Resources, Supervision, Writing – review & editing. Affiliations: LMU Center for Leadership and People Management, LMU Munich, Munich, Germany; Department of Infection Prevention and Infectious Diseases, University Hospital Regensburg, Regensburg, Germany
  • Martina Raue. Roles: Writing – review & editing. Affiliation: MIT AgeLab, Massachusetts Institute of Technology, Cambridge, MA, United States of America
  • Eva Lermer. Roles: Conceptualization, Methodology, Project administration, Resources, Supervision, Writing – review & editing. Affiliations: LMU Center for Leadership and People Management, LMU Munich, Munich, Germany; Department of Business Psychology, Augsburg University of Applied Sciences, Augsburg, Germany

Published: March 10, 2023
https://doi.org/10.1371/journal.pone.0280271

Abstract

Internet trolling is considered a negative form of online interaction that can have detrimental effects on people’s well-being. This pre-registered, experimental study had three aims: first, to replicate the association between internet users’ online trolling behavior and the Dark Tetrad of personality (Machiavellianism, narcissism, psychopathy, and sadism) established in prior research; second, to investigate the effect of experiencing social exclusion on people’s motivation to engage in trolling behavior; and third, to explore the link between humor styles and trolling behavior. In this online study, participants were initially assessed on their personality, humor styles, and global trolling behavior. Next, respondents were randomly assigned to a social inclusion or exclusion condition. Thereafter, we measured participants’ immediate trolling motivation. Results drawn from 1,026 German-speaking participants indicate a clear correlation between global trolling and all facets of the Dark Tetrad as well as with aggressive and self-defeating humor styles. However, no significant relationship between experiencing exclusion/inclusion and trolling motivation emerged. Our quantile regression findings suggest that psychopathy and sadism scores have a significant positive effect on immediate trolling motivation after the experimental manipulation, whereas Machiavellianism and narcissism did not explain variation in trolling motivation. Moreover, being socially excluded had generally no effect on immediate trolling motivation, apart from participants with higher immediate trolling motivation, for whom the experience of social exclusion actually reduced trolling motivation. We show that not all facets of the Dark Tetrad are of equal importance for predicting immediate trolling motivation and that research should perhaps focus more on psychopathy and sadism. Moreover, our results emphasize the relevance of quantile regression in personality research and suggest that even psychopathy and sadism may not be suitable predictors for low levels of trolling behavior.

Citation: Volkmer SA, Gaube S, Raue M, Lermer E (2023) Troll story: The dark tetrad and online trolling revisited with a glance at humor. PLoS ONE 18(3): e0280271. https://doi.org/10.1371/journal.pone.0280271

Editor: Yasin Hasan Balcioglu, Istanbul Bakirkoy Prof Dr Mazhar Osman Ruh Sagligi ve Sinir Hastaliklari Egitim ve Arastirma Hastanesi, TURKEY

Received: June 24, 2022; Accepted: December 25, 2022; Published: March 10, 2023

Copyright: © 2023 Volkmer et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: Data are available from the Open Science Framework database (https://osf.io/fmpdy/).

Funding: The authors received no specific funding for this work.

Competing interests: The authors have declared that no competing interests exist.

1. Introduction

With widespread access to the internet, different patterns of online behavior emerge. One aspect of online communication that has begun to receive more attention from social science researchers is trolling behavior, which refers to disruptive and tactically aggressive online behavior [ 1 ]. Many people have fallen victim to trolling and have experienced a wide range of negative psychological problems as a result [ 2 , 3 ], which may explain the high level of public agreement about the detrimental effect of online trolls among internet users [ 4 ].

Research has shown a relationship between trolling others online and dark personality traits such as sadism, psychopathy, Machiavellianism, and narcissism [ 5 – 8 ]. However, the role of situational variables which may also influence trolling behavior has so far been neglected, although some studies highlight the importance of context-specific predictors for explaining online trolling behavior [ 9 – 12 ].

1.1. Online trolling: Definition and relevance

A troll is defined as a person who (a) starts and/or exacerbates disruptive conflict online for their amusement; (b) is often deceptive, as they tend to have second social media accounts used for trolling; (c) is tactically aggressive to increase emotional responses; and (d) disturbs regular discussions on online platforms to seek attention [ 1 ]. Approaches to trolling other internet users with malicious intent can include veering a conversation off-topic as well as being deliberately controversial, offensive, or inflammatory [ 13 ]. Why do people become trolls? One explanation is that the internet can facilitate disinhibition [ 14 , 15 ], which positively predicts cyberaggression [ 16 ]. According to an early study on trolls [ 17 ], users engage in trolling because they are bored, seek attention or revenge, and find it funny to create trouble for platforms and other users. To create the desired disruption, trolls may write messages that are (a) outwardly sincere, (b) deliberately designed to provoke, and (c) a waste of time through fruitless arguments [ 18 ]. At times, the media and scholars conflate trolling with any negative behavior that occurs online, e.g., cyberbullying, parody, or flaming, when the definition of trolling should be limited to social phenomena “performed individually or collectively in varying online contexts, which involves the use of antagonism, deception and vigilantism […] to provoke reactions from people or institutions” (p. 1,078) [ 19 ].

Notably, the above describes a kind of trolling behavior that aims to negatively affect other users and online discussions. However, trolling can occur in a more light-hearted or even amicable way, e.g., Sanfilippo and colleagues [ 20 ] differentiate between serious and humorous trolling. This differentiation is also highlighted by the distinction participants drew between circumstantial trolling and trolls who are committed to irritating and iterating their actions [ 20 ]. Here, we are interested in the malevolent troll as defined by Hardaker [ 1 ] because of their high relevance to society due to their anti-social behaviors [ 20 ]. In fact, being the victim of trolling is widespread: On Facebook alone, 88% of U.K. teenagers reported that they had been bullied or trolled [ 2 ], and 77% of U.S. adults reported harassment [ 3 ]. Online harassment can lead to anxiety, sleeping problems, and even suicidal thoughts [ 3 ], and 66% of U.S. adult internet users strongly agree that internet trolls are detrimental to society [ 4 ]. In a survey of British Members of Parliament, all respondents reported experiences of being trolled and women reported consequent concerns about their safety [ 21 ].

Some predictors of trolling behavior have become apparent in the literature. For example, research suggests that men are more likely to troll others than women [ 5 , 8 , 22 ] but this gender difference is not found in all studies [ 7 ]. Moreover, some studies found that younger people are more likely to troll others [ 22 , 23 ]. The literature has also showcased that personality is a promising predictor of trolling behavior. The following section will go into more detail about personality traits that predict online trolling. First, we will introduce research on the Dark Tetrad and online trolling; then, we will discuss a link between trolling and humor. Finally, we will outline why situational factors, especially social exclusion, may lead to trolling behavior.

1.2. The dark tetrad and online trolling

Prior research on online anti-social behavior asked, ‘who trolls others?’, sparking investigations of the trolls’ personalities. Specifically, research has confirmed the link between trolling behavior and the Dark Tetrad traits (sadism, psychopathy, Machiavellianism, and narcissism). Narcissism refers to excessive self-love and a grandiose sense of self-importance [ 24 ]; Machiavellianism refers to the willingness to manipulate others [ 25 ]; subclinical psychopathy refers to fearless dominance and disinhibition [ 26 ]; and sadism refers to intentionally inflicting psychological/physical pain for enjoyment or power [ 27 ]. These four correlated, theoretically distinct traits share a core of callous manipulation [ 28 ]. Moreover, the Dark Tetrad facets are associated with self-reported, observer-reported, and behavioral aggression [ 28 ]. Indeed, research has confirmed the link between people scoring high on the Dark Tetrad traits and trolling behavior [ 5 – 8 , 23 , 29 – 31 ] which may be due to lower affective empathy in these individuals [ 32 – 34 ], a tendency for moral disengagement [ 35 ], and reduced behavioral inhibition anxiety [ 36 ]. Moreover, all four facets are positively associated with dominance [ 37 ] and social dominance orientation is also associated with past trolling and acceptance of trolling [ 38 ]. Another reason for the association between the Dark Tetrad and trolling behavior could be intrinsic enjoyment: Research indicates that sadism, psychopathy, Machiavellianism, and narcissism correlate positively with one’s enjoyment of viewing violent stimuli [ 36 ]. Sadism is also related to experiencing greater pleasure during an aggression [ 39 ]. Moreover, a recent meta-analysis suggests that the relationship between sadism and aggressive behavior is stronger in online settings, perhaps due to anonymity [ 40 ]. Psychopathy specifically may also be related to trolling behavior because of its association with impulsivity [ 26 , 41 ]. In the case of narcissism, Vize and colleagues [ 42 ] showed that antagonism primarily drives aggressive behaviors, although all narcissistic dimensions are related to aggressive behavior [ 43 ].

Table 1 showcases the correlates of trolling constructs with the Dark Tetrad in previous research. Overall, a clear pattern emerges, with higher scores on the Dark Tetrad facets being related to more self-reported trolling activities. This pattern has been confirmed for sadism in a recent meta-analysis which revealed a pooled correlation between everyday sadism and online trolling of .52 [ 40 ].

[Table 1. Correlates of trolling constructs with the Dark Tetrad in previous research: https://doi.org/10.1371/journal.pone.0280271.t001]

Based on this research, we aimed to confirm previous correlational findings:

H1: The Dark Tetrad is positively associated with global trolling behavior.

H1a: Machiavellianism is positively associated with global trolling behavior.

H1b: Narcissism is positively associated with global trolling behavior.

H1c: Psychopathy is positively associated with global trolling behavior.

H1d: Sadism is positively associated with global trolling behavior.

1.3. Humor styles and online trolling

This study also investigated the relationship between humor styles and trolling behavior. Specifically, we aimed to investigate humor as conceptualized in the humor styles questionnaire (HSQ) [ 45 ]. The theory of the HSQ assumes that humor can be adaptive or maladaptive for well-being and that people use humor to enhance the self and/or their relationships with others [ 45 ]. Aggressive humor refers to humor that enhances oneself while hurting others, while self-defeating humor is detrimental to oneself and is used to improve relationships [ 46 ]. Meanwhile, affiliative humor and self-enhancing humor are detrimental neither to oneself nor others; more specifically, affiliative humor is used to improve relationships while self-enhancing humor aims at enhancing oneself [ 46 ].

In two previous studies, active trolls stated that they engage in trolling behavior for their enjoyment [ 11 ] and instant entertainment as well as gratification [ 47 ]. This suggests that internet users’ humor styles may also affect whether they troll others online. Indeed, one study [ 48 ] recently showed that trolling behavior is associated with aggressive humor as well as katagelasticism (i.e., the joy of laughing at others). Moreover, self-enhancing and self-defeating humor were both related to trolling behavior in that study. Some authors suggest that katagelasticism may be a cause of trolling behavior [ 49 ]. Additionally, aggressive humor is positively associated with the readiness to be verbally aggressive [ 50 ] which, we expect, may be expressed in trolling. In fact, sarcasm and mockery (aggressive forms of humor [ 51 ]) can be tactics trolls use to disrupt discussions [ 20 , 52 ].

Previous research has also shown that humor styles are associated with the Dark Tetrad. The four dark personality traits are linked with inadequate humor [ 32 ], e.g., schadenfreude in social, academic, and mourning contexts [ 35 , 53 ]. Moreover, humor research indicates that the Dark Tetrad facets are linked to how people utilize and enjoy humor [ 54 ], e.g., higher Machiavellianism and subclinical psychopathy scores are associated with aggressive humor [ 55 , 56 ]. Specifically, psychopathy appears to have a stronger connection with aggressive humor and katagelasticism than Machiavellianism and narcissism [ 54 ]. Sadism also uniquely explains variance in katagelasticism beyond the Dark Triad [ 54 ]. Importantly, katagelasticism not only involves enjoying laughing at others but also actively seeking out situations where one can ridicule others [ 54 ]. We believe that internet trolling may be an expression of katagelasticism in people who score highly on the Dark Tetrad facets. In sum, it is likely that trolling behavior is associated with more aggressive humor and this link may exist because people who score highly on the Dark Tetrad use humor differently than people who score lower on the Dark Tetrad. Based on the prior research, we had one further hypothesis and one research question:

H2: Aggressive humor is positively associated with trolling behavior.

RQ: How do affiliative, self-defeating, and self-enhancing humor relate to trolling behavior?

1.4. Social exclusion and online trolling

While findings on the Dark Tetrad traits and internet trolling are relatively consistent across studies, it is important to keep in mind that other variables may be of importance as well: Naturally, behaviors can be influenced by individual (e.g., a sadistic personality) and situational (e.g., being exposed to aggressive behaviors from others) factors, and this should also be the case for trolling behavior. For example, in a study in which trolling behavior was assessed by rating comments the participants made, the authors found that negative mood (and exposure to troll comments) triggers trolling behavior [ 9 ]. Other studies found that anonymous chat rooms elicited more troll comments than chat rooms where participants were identifiable [ 12 ]. Additionally, research suggests that boredom in life [ 57 ] and loneliness [ 58 ] predict trolling. These studies highlight the importance of situational factors (here negative mood, exposure to trolls, anonymity, boredom, and loneliness) in the context of internet trolls.

To the authors’ best knowledge, most studies on trolling behavior neglect the investigation of external variables; however, one experiment showed that participants who were socially excluded through a mobile phone text messaging set-up with two other people wrote more provocative messages afterward [ 10 ]. Thus, one of the situational factors that might increase trolling behavior could be social exclusion: Revenge can be a motivation for trolling behavior [ 11 ] and aggressive responses to rejection can occur, for example, to regain control [ 59 – 61 ]. This aligns with the meta-analytic finding that the relationship between narcissism and aggression is stronger after provocation [ 43 ]. One recent study [ 62 ] showed that social exclusion compared to social inclusion leads to significantly higher cyberaggression in narcissistic individuals. Moreover, cyberaggression has been found to relate to loneliness and being less socially accepted [ 63 ]. Additionally, in the above-mentioned study, socially excluded participants not only wrote more aggressive text messages but also reported worse mood [ 10 ], which has predicted writing trolling comments in other research [ 9 ]. Importantly, scoring highly on Dark Tetrad traits is associated with difficulties in emotion regulation [ 32 , 64 , 65 ]; hence, we believe that exclusion experiences may be difficult to process for trolls which then may lead to trolling behavior.

Hence, based on the prior research described above [ 10 , 11 , 62 ], we suggest that it might also be possible that feeling excluded motivates people to troll other internet users to avenge themselves. With quick access to social media and the potential of social exclusion occurring online, trolling posts/comments might be an easy way for people who just experienced social exclusion to regain their perceived control [ 59 – 61 ]. Consequently, we intend to add to the base of knowledge of ‘who trolls?’ by also asking ‘when?’. To that end, not only did we investigate global trolling behavior, i.e., a person’s general trolling behavior, but also immediate trolling motivation after an exclusion experience. We hypothesized:

H3: Participants who are socially excluded show increased immediate trolling motivation compared to people who are socially included.

As shown in Table 1, the Dark Tetrad facets correlate positively with trolling behavior. However, a new pattern emerges when looking at multiple regression results rather than bivariate correlations: In prior research, only some of the Dark Tetrad facets explained a significant amount of variance in trolling behavior when looking at partial correlations or multiple regression [ 6 , 8 , 22 , 23 , 58 ]. Specifically, in some studies [ 23 , 66 ], only sadism and psychopathic tendencies explained a significant amount of variance in trolling behavior when all facets of the Dark Tetrad were included in a multiple regression analysis. In contrast, earlier work revealed sadism and Machiavellianism as significant positive predictors of trolling enjoyment, while psychopathy was unrelated and narcissism showed a negative association [ 5 ]. Other research shows significant positive effects of Dark Tetrad facets except for narcissism [ 58 ]. Overall, sadistic tendencies appear to have a greater impact on trolling behavior than the other facets [ 32 ]. Hence, while all facets of the Dark Tetrad appear to correlate with trolling behavior, in multiple regression analyses, psychopathic personality traits as well as sadism appear to be more consistent predictors of internet trolling than Machiavellianism and narcissism. This highlights the need to investigate the differing roles of the Dark Tetrad facets in more detail.

H4: Social exclusion and Machiavellianism, narcissism, psychopathy, and/or sadism can predict immediate trolling motivation.

The present study aims at expanding prior findings by (a) confirming the previously established predictive role of Dark Tetrad traits, (b) investigating the role of humor styles as predictors of global trolling behavior, and (c) testing the impact of a situational variable, namely social exclusion, on immediate trolling motivation. Based on these aims, we conducted an experiment using the Cyberball paradigm and tested the effects of inclusion and exclusion on the participants’ immediate trolling motivation.

2. Method

2.1. Participants

This study includes 1,026 participants (M age = 26.46, SD age = 5.88; 77.2% female) recruited from four German universities and a popular science website for psychology (https://www.psychologie-heute.de/aktuelles/studienteilnahme.html). Our sample size surpasses the 260 participants required to detect an effect of .18 (the smallest correlation between trolling behavior and the Dark Tetrad reported at the time of pre-registration [ 22 ]) with an alpha of .05 and a power of .90.
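The reported minimum of 260 participants is consistent with a one-sided power analysis for r = .18 (the paper notes later that H1 and H2 were pre-registered as one-sided tests). A minimal sketch in R, assuming the pwr package; the authors do not name their power-analysis tool:

    # Sample size needed to detect r = .18 with alpha = .05 and power = .90,
    # one-sided. The pwr package is an assumption, not the authors' stated tool.
    library(pwr)
    pwr.r.test(r = 0.18, sig.level = 0.05, power = 0.90, alternative = "greater")
    # n comes out to roughly 260, in line with the figure reported above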

2.2. Materials

Demographic questions assessed participants’ age, gender, sexuality, nationality, and favorite social media platform. We also assessed whether participants had a fake account and, if so, on which platforms and for what purposes.

Global and immediate trolling behavior.

To assess global trolling behavior, we used the revised Global Assessment of Internet Trolling (GAIT-Revised) [ 8 ]. This is an 8-item self-report measure that assesses trolling behavior online (e.g., “Although some people think my posts/comments are offensive, I think they are funny.”). Items are rated on a 5-point scale from 1 (strongly disagree) to 5 (strongly agree); Cronbach’s alpha = .60. As no validated German version of the GAIT-Revised exists yet, the authors translated this measure and discussed the item formulations. The translation process involved several revisions, each aiming to maximize the accuracy of the translation while keeping the German formulation as natural as possible. All people involved in this process were fluent in English and German and familiar with the concept of internet trolling. Next to the GAIT-Revised as a global measure of trolling behavior, this study also assessed immediate motivation to troll others (Immediate Assessment of Internet Trolling, IAIT). The authors created the IAIT by reformulating the GAIT items to address the present moment (e.g., “Just now, I want to share posts/comments that I think are funny, although some people might think they are offensive.”). This process also involved several steps during which the item formulations were clarified and improved. As with the GAIT, the authors critically examined the item formulations for understandability and face validity. Cronbach’s alpha for all items was .54; by excluding item 6 (“I prefer not to cause controversy or stir up trouble right now”), we achieved a Cronbach’s alpha of .70 for the IAIT.
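To make the item-exclusion step concrete, a minimal sketch using the psych package; the data frame iait_items and its column layout are assumptions for illustration:

    # Internal consistency of the 8-item IAIT, before and after dropping item 6.
    # `iait_items` is a hypothetical data frame with one column per item.
    library(psych)
    psych::alpha(iait_items)          # full scale; the paper reports alpha = .54
    psych::alpha(iait_items[, -6])    # item 6 removed; the paper reports alpha = .70

The alpha() output also tabulates the reliability if each item is dropped, which is one way such an exclusion decision can be screened.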

Dark Tetrad.

The Short Dark Triad (SD3) [ 67 , 68 ], a 27-item scale, was used to measure Machiavellianism (e.g., “I like to use clever manipulation to get my way.”, Cronbach’s alpha = .76), narcissism (e.g., “Many group activities tend to be dull without me.”, Cronbach’s alpha = .73), and subclinical psychopathy (e.g., “People who mess with me always regret it.”, Cronbach’s alpha = .71). We used a published, validated German translation of the SD3 [ 68 ]. Items are rated on a 5-point scale from 1 ( strongly disagree ) to 5 ( strongly agree ). To assess sadism, the Comprehensive Assessment of Sadistic Tendencies (CAST) [ 69 ] was used. The CAST is an 18-item self-report measure that assesses sadistic personality. The CAST can be divided into three dimensions: direct verbal sadism (e.g., “I was purposely mean to some people in high school.”), direct physical sadism (e.g., “I enjoy physically hurting people.”), and vicarious sadism (e.g., “In video games, I like the realistic blood spurts.”). Items are rated on a 5-point scale from 1 ( strongly disagree ) to 5 ( strongly agree ). As no validated German versions exist for the CAST yet, the authors translated this measure and discussed the item formulations in several steps to match them as closely as possible to the original English and maximize the naturalness of the German translation. Cronbach’s alpha for the CAST = .75.

Humor styles.

To assess humor styles, we used the Humor Styles Questionnaire (HSQ) [ 46 , 47 ], a 32-item self-report measure that comprises four humor styles: self-enhancing (e.g., “If I am feeling depressed, I can usually cheer myself up with humor.”, Cronbach’s alpha = .84), affiliative (e.g., “I laugh and joke a lot with my friends.”, Cronbach’s alpha = .81), aggressive (e.g., “If I don’t like someone, I often use humor or teasing to put them down.”, Cronbach’s alpha = .70), and self-defeating (e.g., “I often try to make people like or accept me more by saying something funny about my own weaknesses, blunders, or faults.”, Cronbach’s alpha = .75). Items are rated on a 7-point scale from 1 (totally disagree) to 7 (totally agree). While no official validation study for the German HSQ exists, there is support for the factorial validity of the German translation we used [ 47 ].

Experimental manipulation.

To manipulate social exclusion, we used the Cyberball paradigm [ 70 ]. Cyberball is an experimental manipulation that leads participants to believe that they are playing an online ball-tossing game with two other study participants. In reality, the behavior of the other players is programmed. In this study, participants were either excluded or included in the ball-tossing game. In the inclusion condition, participants got the ball ten times out of 30 tosses. In the exclusion condition, participants got the ball only one time. Using 30 throws is common practice in social exclusion studies [ 71 ].

2.3. Procedure

This study was preregistered before data collection (https://osf.io/qsfe5) and adheres to Section 15 of the Professional Code of Conduct for Physicians in Bavaria; hence, no vote by the Ethics Committee was necessary. All participants provided online informed consent in accordance with the Declaration of Helsinki. Before the study, participants received information about the context of the research, were told that all information was anonymous, and were informed that they were free to discontinue the study at any point in time. To begin the study, participants had to click on a field that read “I agree with these conditions and want to proceed.” Minors were not allowed to participate in this research. After giving informed consent, participants answered the GAIT-Revised, SD3, CAST, and HSQ as well as demographic items. Then, participants were randomly allocated to either a social exclusion or inclusion condition using the Cyberball paradigm. After the Cyberball manipulation, participants were asked to rate their immediate motivation to troll others (IAIT). Following this, study subjects received an explanation of the study’s actual goal.

Throughout the survey, we placed three attention-check items. If participants gave wrong answers to these items, the experiment immediately ended and brought the subjects to an explanation page.

2.4. Analysis

Before the analysis, people were excluded when they indicated that they had not taken the questionnaire seriously (participants were asked this directly), were already familiar with Cyberball, were unable to see the ball-tossing program, or were below the age of 18. Additionally, because some answers were randomly missing due to the questionnaire software, we excluded cases listwise for missing data. Finally, because only one person indicated their gender as “other than male or female,” this participant was also excluded from further analysis; no reliable inferences can be drawn from a sample of one.

In our pre-registration, we planned to use means of the measures for the analyses. However, after investigating the item loadings of our measures using confirmatory factor analysis (see S1 Appendix), we were concerned about the validity of the scales when using all items. Consequently, we conducted all analyses twice, once using the means of all items as preregistered and once using means that only included items with a standardized loading on their factor of at least .40. This also serves as a robustness check of our analyses. We report the analyses with the traditional means in the Results section below; the analyses using means without low loading items are reported in the S1 Appendix.
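A minimal sketch of this robustness check using lavaan (which the authors report using in Section 2.4); the one-factor model and the item names gait1 to gait8 are placeholders for illustration:

    # CFA for a single global-trolling factor, then a scale mean rebuilt
    # from items with standardized loadings of at least .40.
    # Model string and variable names are assumptions.
    library(lavaan)
    model <- 'trolling =~ gait1 + gait2 + gait3 + gait4 +
                          gait5 + gait6 + gait7 + gait8'
    fit <- cfa(model, data = df)
    loads <- standardizedSolution(fit)
    keep  <- loads$rhs[loads$op == "=~" & loads$est.std >= .40]
    df$gait_mean_robust <- rowMeans(df[, keep])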

We preregistered an ordinary least squares (OLS) multiple regression analysis to test H4, which aimed to predict immediate trolling motivation using social exclusion (yes/no) and the Dark Tetrad (Machiavellianism, narcissism, psychopathy, and sadism). Additionally, age and gender were added as control variables, as those variables have been shown to explain trolling behavior in the past [ 22 ]. Our regression assumption checks showed some violations of the assumptions of homoscedasticity and normally distributed errors. Hence, we decided to deviate from our pre-registration and conducted a quantile regression analysis, which allows residuals to have different variances [ 72 ] and does not assume a parametric distributional form (here, normal) for the errors [ 73 ]. Consequently, a quantile regression analysis should be better suited for our data. Moreover, prior research has compared simple linear regression with quantile regression for personality trait data and concluded that quantile regression can showcase more nuanced and heterogeneous effects [ 74 , 75 ].

Unlike OLS regression, quantile regression relies on quantiles of the outcome variable, which results in several coefficients for a single covariate [ 76 ]. These coefficients are interpreted based on their respective quantile of the outcome variable [ 76 ]. For example, while OLS regression may indicate an average effect of a woman’s partner’s meanness on relationship satisfaction, quantile regression can show that for the 15th quantile of relationship satisfaction (i.e., the least satisfied women), the partner’s meanness reduces relationship satisfaction by 0.61 points, while it reduces relationship satisfaction by only 0.14 points for the 85th quantile (i.e., the most satisfied women) [ 77 ].
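A minimal sketch of the H4 model with the quantreg package (named in Section 2.4); the predictor and outcome names are placeholders:

    # Quantile regression of immediate trolling motivation (IAIT) on condition,
    # the Dark Tetrad, and the controls age and gender.
    # Taus run from .05 to .95 in steps of .10, mirroring the paper.
    library(quantreg)
    fit_q <- rq(iait ~ excluded + mach + narc + psycho + sadism + age + gender,
                tau = seq(0.05, 0.95, by = 0.10),
                data = df)
    summary(fit_q)        # one coefficient set per quantile
    plot(summary(fit_q))  # coefficient paths across quantiles, as in Fig 1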

Finally, we decided to conduct a dominance analysis that was not pre-registered. We based our decision on criticism of the use of multivariate statistics in Dark Tetrad research [ 78 ]. Dominance analysis can provide indicators of relative predictor importance [ 79 , 80 ]. Dominance analysis calls one predictor more important than another “if it would be chosen over its competitor in all possible subset models where only one predictor of the pair is to be entered” (p. 134) [ 79 ]. The dominance analysis approach should provide more definite answers than multiple regression to the question of which Dark Tetrad facets are the most relevant for predicting trolling. The approach has been used previously to answer similar questions, for example, to estimate which narcissism subcomponent is most important for predicting aggressive behavior [ 42 ]. We consider the dominance analysis here as an additional robustness check. We tested the importance of all regression predictor variables of our H4 model. The analysis can be found in the S1 Appendix.
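A minimal sketch of the dominance analysis with the dominanceanalysis package (named in Section 2.4), applied to the OLS form of the H4 model; variable names are again placeholders:

    # Dominance analysis: fits all subset models and ranks predictors by their
    # average contribution to R^2 across subsets.
    library(dominanceanalysis)
    fit_ols <- lm(iait ~ excluded + mach + narc + psycho + sadism + age + gender,
                  data = df)
    da <- dominanceAnalysis(fit_ols)
    da  # prints general, conditional, and complete dominance per predictor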

We used the R packages “lavaan” [ 81 ] to conduct our confirmatory factor analyses, “quantreg” [ 82 ] to conduct our quantile regression analysis, and “dominanceanalysis” [ 83 ] to conduct our dominance analysis (R version 4.1.2). For the remaining analyses, we used SPSS (version 26). For our quantile regression, the model using means without low loading items produced a non-unique solution; restricting the tau range from 5:95 to 10:90 resulted in a unique solution, which is reported in the S1 Appendix.

3. Results

3.1. Sample descriptives

Overall, 1,026 people participated in this study (M age = 26.46, SD age = 5.88); 77.2% were female and 22.8% were male. Table 2 describes the study’s sample concerning age and our continuous constructs.

[Table 2. Sample descriptives: https://doi.org/10.1371/journal.pone.0280271.t002]

3.2. Global trolling behavior and personality traits

3.2.1. Global trolling behavior and the Dark Tetrad.

To test our first hypothesis (H1: The Dark Tetrad is positively associated with global trolling behavior) and the respective sub-hypotheses, we looked at the correlations between Dark Tetrad personality scores and global trolling behavior; see Table 3. Our results indicate that each of the Dark Tetrad personality facets correlates positively and significantly with global trolling behavior. Thus, these findings support H1. Please note that we pre-registered one-sided tests for H1 and H2 but tested two-sided for significance.

[Table 3. Correlations between the Dark Tetrad facets, humor styles, and global trolling behavior: https://doi.org/10.1371/journal.pone.0280271.t003]
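For concreteness, the directionality point noted above can be made explicit in the test call; a base-R sketch with assumed variable names:

    # Correlation between global trolling (GAIT) and sadism (CAST).
    cor.test(df$gait, df$sadism, alternative = "greater")    # as pre-registered (one-sided)
    cor.test(df$gait, df$sadism, alternative = "two.sided")  # as reported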

3.2.2. Global trolling behavior and humor styles.

To test our second hypothesis (H2: Aggressive humor is positively associated with trolling behavior) and our research question (How do affiliative, self-defeating, and self-enhancing humor relate to trolling behavior?), we checked the correlations between global trolling behavior and the four humor styles; see Table 3 . As predicted, higher aggressive humor was significantly associated with more global trolling, r (1024) = .36, p < .01. Next to aggressive humor, self-defeating humor was also significantly associated with trolling behavior, r (1024) = .12, p < .01, while affiliative humor and self-enhancing humor showed no significant relationship with trolling. Thus, these findings support H2 and answer our research question.

3.3. Immediate trolling motivation and social exclusion

To test our third hypothesis (H3: Participants who are socially excluded show increased immediate trolling motivation compared to people who are socially included), we conducted a t -test with exclusion (yes/no) as the independent and immediate trolling motivation as the dependent variable. Our result suggests that the experience of exclusion did not significantly impact participants’ immediate trolling motivation, t (1024) = 0.91, p = .37, CI = [-.02; .06]. Thus, findings from this analysis did not support H3.
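The reported df of 1,024 (n minus 2) implies a pooled two-sample test; a base-R sketch with assumed variable names:

    # Immediate trolling motivation (IAIT) by Cyberball condition.
    # var.equal = TRUE yields the pooled test with df = n - 2 = 1024.
    t.test(iait ~ condition, data = df, var.equal = TRUE)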

3.4. Predicting immediate trolling motivation

The quantile regression results to test our fourth hypothesis (H4: Social exclusion and Machiavellianism, narcissism, psychopathy, and/or sadism can predict immediate trolling motivation) are graphically presented in Fig 1. For example, the third graph in the first row of Fig 1 shows how Machiavellianism is predictive of immediate trolling motivation. The red horizontal line represents the ordinary least squares (OLS) coefficient for Machiavellianism, while the x-axis represents the quantiles (at 0.2, 0.4, 0.6, and 0.8) of immediate trolling motivation. Furthermore, the black broken line indicates the coefficients at the respective quantiles, here ranging from the 0.05 to the 0.95 quantile in 0.10 steps. Machiavellianism does not seem to predict immediate trolling motivation for the 0.2 quantile of immediate trolling motivation, while for the 0.6 quantile, Machiavellianism appears to have a positive (albeit non-significant, see Table 4) effect on immediate trolling motivation.

[Fig 1. Simple linear regression coefficients (red line) and quantile regressions for exclusion, Machiavellianism, narcissism, psychopathy, sadism, age, and gender (male with female as the comparison group) for the dependent variable immediate trolling motivation. The x-axis represents the quantiles for immediate trolling motivation while the y-axis represents the unstandardized coefficients of the respective independent variable. https://doi.org/10.1371/journal.pone.0280271.g001]

[Table 4. OLS and quantile regression coefficients predicting immediate trolling motivation: https://doi.org/10.1371/journal.pone.0280271.t004]

Table 4 provides the quantile regression coefficients and their significance as well as the OLS regression results. A comparison between OLS and quantile coefficients reveals some notable differences: For example, the OLS regression coefficient for sadism indicates that for every 1-point increase on the sadism scale, immediate trolling motivation increases by 0.20 points. In contrast, the quantile analysis shows that there is no significant relationship between sadism and immediate trolling motivation for the quantiles 0.05 to 0.35.

Unlike the OLS coefficient for social exclusion, the exclusion coefficients for the 0.75 and the 0.85 quantiles are significant and, in contrast to our predictions, negative. This indicates that experiencing social exclusion may reduce immediate trolling motivation for people in the upper immediate trolling motivation quantiles.

Thus, quantile regression allowed us to provide a more nuanced view of the relationship between our independent variables and trolling motivation. Overall, we find no significant effects of exclusion experience, Machiavellianism, and narcissism on immediate trolling motivation. Meanwhile, for the 0.45 quantile and higher quantiles of immediate trolling motivation, psychopathy and sadism appear to become increasingly relevant.

3.5. Robustness of findings

Due to concerns about construct validity, we conducted CFAs and consequently reran our analysis using means that excluded low loading items (see S1 Appendix ). Overall, our robustness analysis replicates the correlational findings and the t -test finding. However, the quantile regressions differ to some degree. After excluding low-loading items, only sadism predicted immediate trolling motivation for higher quantiles of immediate trolling motivation. In other words, neither Machiavellianism nor narcissism or psychopathy uniquely predicted immediate trolling motivation when we controlled for low loading items.

This finding is mirrored by our additional dominance analyses (see S1 Appendix ): When using the traditional means for our dominance analysis, both sadism and psychopathy were the most important predictors for immediate trolling motivation. However, when we used means without low loading items, sadism became the most important predictor and dominated psychopathy with its predictive power.

4. Discussion

In this pre-registered study, we investigated trolling behavior and its association with the Dark Tetrad and humor styles. We found support for our first hypothesis (H1: The Dark Tetrad is positively associated with global trolling behavior): Machiavellianism, narcissism, psychopathy, and sadism showed significant positive correlations with global trolling behavior. Moreover, we found support for our second hypothesis (H2: Aggressive humor is positively associated with trolling behavior) and also observed a positive correlation between global trolling behavior and self-defeating humor. In contrast to our expectations (H3: Participants who are socially excluded show increased immediate trolling motivation compared to people who are socially included), we found no effect of social exclusion experience on immediate trolling motivation. Consequently, we could also only partly accept our fourth hypothesis (H4: Social exclusion and Machiavellianism, narcissism, psychopathy, and/or sadism can predict immediate trolling motivation), as psychopathy and sadism but not Machiavellianism nor narcissism were significant predictors of immediate trolling motivation for the higher quantiles of immediate trolling motivation. Moreover, the findings concerning psychopathy should be considered with caution since sadism was the only significant predictor in our robustness quantile regression. Though the effect of social exclusion was generally non-significant, we found significant and negative effects for the 0.75 and 0.85 quantiles of immediate trolling motivation. We will now discuss these findings in the context of current research.

First, our results validate the correlational association between trolling and dark personality traits in the German-speaking context. However, our correlations were only in the small to moderate range, which somewhat contrasts with prior research in which trolling-personality correlations of up to r = .71 were sometimes reported [ 23 ].

Second, and in contrast to our predictions, experiencing social exclusion did not (always) lead to stronger immediate motivation to troll others. Indeed, social exclusion was a significant predictor of immediate trolling motivation for the 0.75 and the 0.85 immediate trolling motivation quantile, with being excluded appearing to reduce motivation to troll others. In our robustness quantile regression, social exclusion did not predict immediate trolling motivation for any quantile of immediate trolling motivation. This result stands out because exclusion has been shown to lower one’s mood [ 59 ], and a prior study [ 9 ] suggests that bad mood can contribute to trolling behavior. However, it should be noted that the assessment of trolling behavior in the present study differed from the one used by Cheng and colleagues (2017) [ 9 ]. The present study assessed participants’ immediate motivation to troll by relying on a self-report measure. This was done due to technical restrictions in our study design and the survey platform. In contrast, Cheng et al. (2017) asked participants to interact in a comment section under a short news article, and comments were then rated as troll comments (yes/no) by two independent experts. Thus, their results might have occurred because participants had an immediate chance to act on impulses to troll others, whereas subjects in the present study were asked about their intentions. As such, the difference in results may be due to an intention-behavior gap. We assessed immediate trolling motivation using a self-report measure rather than assessing actual trolling behavior unobtrusively (e.g., by letting people write comments and rating trolling content). Using self-reports rather than unobtrusive approaches has been shown to result in a bias towards socially desirable responses [ 84 , 85 ]. However, it might also be the case that Cyberball was not a sufficient manipulation or that ostracism does not always lead to aggressive or revengeful behavior. A meta-analysis [ 59 ] specifies that rejection does not necessarily have to result in aggression: Following rejection, people act antisocially to satisfy a need for control that could otherwise not be achieved. In our context, participants were excluded by strangers whom the participants would never meet again after a five-minute game. It might be the case that their perceived control was not reduced enough to react in an antisocial way.

Another explanation for our generally non-significant results for social exclusion, as well as the two observed negative effects of social exclusion on trolling motivation, may be due to one exclusion experience not leading to immediate aggression: Short-term social exclusion generally appears to lead to behaviors meant to ameliorate the situation so long as control can be regained [ 86 , 87 ]. In this context, the two negative effects of social exclusion on immediate trolling motivation make sense, as they could be understood as a way to regain affiliative opportunities. In comparison, long-term social exclusion may result in the temporarily aggressive self becoming a person’s actual self [ 88 ]. Hence, we might hypothesize that long-term rather than short-term social exclusion may lead to trolling behavior.

Finally, it is important to note that our assessment of trolling motivation was not person-specific; in other words, we did not ask if participants wanted to troll the people who had just excluded them. Cook and colleagues (2018) [ 11 ] found that revenge is a reason to troll others for self-confessed trolls and that this is a response to others behaving ‘stupidly’ (p. 3332) or being trolled themselves. Based on these points, it might prove valuable to re-examine the effect of exclusion on trolling behavior in a more realistic context where people have the chance to target users who ostracized them.

Third, this study showed that psychopathy and sadism predict immediate trolling motivation, whereas other dark personality facets did not. Moreover, in our robustness analysis, only sadism predicted immediate trolling motivation. These findings are not completely surprising, as other multiple regression analyses also do not find that each of the Dark Tetrad facets explains a significant amount of variation in trolling behavior. For example, in one study [ 5 ], neither Machiavellianism nor narcissism significantly predicted trolling, and the same result was found by Craker and March (2016) [ 22 ] for trolling behavior on Facebook. Thus, it appears that psychopathy and sadism are significant predictors of trolling when all facets of the Dark Tetrad are taken into account. In contrast, Machiavellianism and narcissism probably do not explain any variance in immediate trolling motivation when psychopathy and sadism are controlled for. A recent review indicates (a) that sadism generally motivates trolling more than the remaining Dark Tetrad facets and (b) that the association of sadism with psychopathy is stronger compared to the relationships with narcissism or Machiavellianism [ 32 ]. The same review also suggests that different aggressive behaviors show stronger associations with sadism and psychopathy but not necessarily with narcissism [ 32 ]. The finding that sadism is a better predictor of trolling than the Dark Triad facets [ 32 ] mirrors our robustness quantile regression where, after low loading items were excluded, only sadism predicted trolling motivation.

These patterns may reflect the stronger associations of sadism and psychopathy with aggressive behaviors, but they may also stem from the often one-dimensional assessment of the Dark Triad facets [78]. Dark Triad research has been criticized because measures often neglect the multidimensionality of these facets [78]. Investigating subcomponents of the Dark Tetrad facets might have provided more nuanced insights into the relationships between personality and trolling behavior. For example, Vize and colleagues [42] investigated different facets of narcissism and found that grandiose narcissism was more important in explaining proactive aggression, whereas vulnerable narcissism was more important for reactive aggression. In the context of our experiment, which manipulated social exclusion, differentiating between grandiose and vulnerable narcissism might have aided our understanding.

The concept of the Dark Tetrad has received further criticism: some researchers suggest that narcissism and Machiavellianism are features of psychopathy and that the Dark Triad therefore does not explain variance beyond psychopathy [89]. Finally, humor research has indicated that psychopathy outperforms the other facets of the Dark Tetrad in explaining aggressive humor and katagelasticism (the enjoyment of laughing at others) [54], which may instigate trolling behavior [49]. Sadism also uniquely predicts katagelasticism, although to a lesser degree than psychopathy [54]. In contrast, narcissism is associated with lighter forms of humor that enable relationship-building, while Machiavellianism is strongly associated with the use of irony and the fear of being laughed at [54].

As such, several potential reasons for our findings arise: (a) psychopathy and sadism are stronger predictors of aggressive behavior; (b) we did not assess the multidimensionality of narcissism, Machiavellianism, and psychopathy; (c) the effects of narcissism and Machiavellianism may already be captured by psychopathy due to an overlap in definitions; and (d) psychopathy and sadism appear to have stronger associations with katagelasticism than narcissism and Machiavellianism. Finally, sadism may outperform psychopathy in our robustness analysis because people with strong sadistic tendencies feel greater aggressive pleasure, which may motivate aggressive behavior [39], in our case the motivation to troll others. This intrinsic enjoyment of inflicting pain is not a crucial component of psychopathy [90].

Fourth, our quantile regression showed that even psychopathy and sadism have no explanatory power for the lower quantiles of immediate trolling motivation. For the higher quantiles (i.e., participants in the higher percentiles of immediate trolling motivation), however, psychopathy and sadism become stronger predictors. This finding highlights the value of quantile regression in personality research and suggests that even psychopathy and sadism may not be suitable predictors of low or absent trolling motivation.
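To make the quantile-regression logic behind this finding concrete, here is a minimal sketch in Python with statsmodels and simulated data (the variable names and data-generating process are illustrative assumptions, not the study’s data). It fits one regression per quantile and produces a coefficient that is small at low quantiles and grows at high ones:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1_000

# Simulated data: the predictor shifts only the upper tail of the outcome,
# mimicking a trait that matters for highly motivated trolls but not others.
sadism = rng.normal(size=n)
trolling = 1.0 + rng.exponential(scale=1.0, size=n) * np.exp(0.5 * sadism)
df = pd.DataFrame({"trolling": trolling, "sadism": sadism})

# One regression per quantile: OLS would fit only the conditional mean,
# whereas quantile regression fits each conditional quantile separately.
for q in (0.25, 0.50, 0.75, 0.85):
    fit = smf.quantreg("trolling ~ sadism", df).fit(q=q)
    print(f"quantile {q:.2f}: sadism coefficient = {fit.params['sadism']:+.3f}")
```

Run on such right-skewed data, the printed coefficients increase with the quantile, which is exactly the kind of pattern a single regression on the mean would mask.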

Lastly, this study found significant associations between trolling and both aggressive and self-defeating humor, confirming the recent findings of Navarro-Carrillo and colleagues (2021). The positive relationship between aggressive humor and trolling behavior is in line with prior research showing links between aggressive humor and at least some dark personality traits [55, 56]. Our finding also fits Hardaker’s (2010) [1] definition of trolls as users who create conflict for their own amusement.

The association between trolling and self-defeating humor might appear less intuitive, but there are potential explanations for this finding: aggressive humor has been shown to correlate with self-defeating humor, and in Martin and colleagues’ (2003) [46] study this relationship was significant for both men and women. Moreover, despite their egocentric tendencies, trolls may still lack self-confidence [48].

4.1. Limitations and further research

This study has some limitations. First, since there is no established instrument for measuring immediate trolling motivation, we used an adapted version of the GAIT [8]. Because no officially validated German versions of the GAIT and CAST existed, we used our own translations of the scales. While we aimed to preserve the original meaning of the items and concepts while maximizing the naturalness of the German formulations, this remains a limitation of the present study. Due to concerns about scale validity, we ran robustness analyses in which we excluded low-loading items; this led to partially differing results for our quantile regressions and our dominance analyses. Consequently, we urge researchers to translate and validate scales to allow for more rigorous research across different cultures.
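To illustrate the item-screening step behind such a robustness analysis, the following sketch (Python with scikit-learn and synthetic data; the six items, the 0.40 cutoff, and all names are illustrative assumptions rather than the study’s documented procedure) fits a one-factor model, drops items whose absolute loading falls below the cutoff, and recomputes the scale score:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500

# Synthetic responses to a six-item scale: items 0-4 share a common factor,
# item 5 is mostly noise and should be flagged as low-loading.
latent = rng.normal(size=n)
true_loadings = np.array([0.8, 0.7, 0.75, 0.65, 0.7, 0.1])
items = latent[:, None] * true_loadings + rng.normal(scale=0.6, size=(n, 6))

# One-factor solution; components_ holds the estimated loadings.
fa = FactorAnalysis(n_components=1).fit(items)
loadings = fa.components_.ravel()

threshold = 0.40  # illustrative cutoff, not the study's documented value
keep = np.abs(loadings) >= threshold
print("estimated loadings:", np.round(loadings, 2))
print("items retained:", np.where(keep)[0])

# Scale scores recomputed from the retained items only.
scores = items[:, keep].mean(axis=1)
```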

Moreover, assessing immediate trolling behavior (observable) rather than motivation (self-report) could prove more fruitful in future online trolling research, since answering a questionnaire may produce socially desirable responses [84, 85] that make true effects harder to detect. Additionally, being asked about trolling motivation right after an experimental manipulation might hint at the study’s purpose. To investigate online trolling further, more immersive and ecologically valid assessments (as used by Cheng and colleagues (2017) [9]) should be applied. This could even be done through field experiments in online multiplayer games.

Another limitation is that we did not include a manipulation check for the social exclusion manipulation. We omitted this check both to avoid making the purpose of the present study apparent to participants and because a meta-analysis of 120 studies found large (d > 1.4) and generalizable effects for the Cyberball manipulation [71].

The present study only investigated malicious, serious trolling. However, another, humorous form of trolling behavior also exists [ 20 ], and to the best of our knowledge, this type of trolling has not received much attention in prior personality and humor research. Future studies may want to investigate potential differences in personality facets and humor styles between serious and humorous trolling and whether the same people use both forms of online interaction depending on different situational circumstances.

While our study has some limitations, it also opens new avenues for further research. We suggest that future trolling research consider the criticisms of the Dark Tetrad [78, 89] and propose two strategies for future study designs. First, researchers should take the multidimensionality of the Dark Tetrad facets into account and select measures that address the complexity of the dark personality traits. This has already been done in part by Paananen and Reichl [30], who distinguished verbal, physical, and vicarious sadism, and by March [29], who differentiated between direct and vicarious sadism as well as between primary and secondary psychopathy. We believe that these approaches can help provide a more nuanced perspective on trolling behavior. Second, we suggest dominance analysis [79, 80] to compare how well components and subcomponents of the Dark Tetrad explain trolling behavior; a sketch of the idea follows below. This kind of analysis has demonstrated, for example, that the narcissism factor interpersonal antagonism explains more variance in aggressive and antisocial behaviors than the facets of extraversion and neuroticism do [42].
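To sketch what such a dominance analysis computes, the toy example below (plain Python with scikit-learn; the facet names and data are simulated placeholders, and the statistic is an unweighted simplification of the size-weighted general dominance measure implemented in packages such as dominanceanalysis [83]) averages each predictor’s incremental R² over all subsets of the other predictors:

```python
from itertools import combinations

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 800

# Simulated predictors standing in for Dark Tetrad facets, plus an outcome
# in which the third facet carries most of the signal.
X = rng.normal(size=(n, 3))
y = 0.2 * X[:, 0] + 0.6 * X[:, 2] + rng.normal(size=n)
names = ["narcissism", "psychopathy", "sadism"]

def r2(cols):
    """R-squared of a linear model restricted to the given predictor columns."""
    if not cols:
        return 0.0
    sub = X[:, list(cols)]
    return LinearRegression().fit(sub, y).score(sub, y)

# For each predictor, average the R-squared it adds over every subset of
# the remaining predictors (a simplified, unweighted dominance statistic).
for j, name in enumerate(names):
    others = [k for k in range(len(names)) if k != j]
    gains = [
        r2(subset + (j,)) - r2(subset)
        for size in range(len(others) + 1)
        for subset in combinations(others, size)
    ]
    print(f"{name}: average incremental R^2 = {np.mean(gains):.3f}")
```

A predictor that adds substantial R² no matter which other facets are already in the model dominates in this sense, which is the comparison the suggestion above calls for.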

Concerning humor styles, an interesting finding of our study is the positive correlation between global trolling behavior and self-defeating humor. It appears that trolls (online or offline) target not only others with their humor but also themselves. One way to pursue this further would be to include self-esteem as a predictor of trolling behaviors. Moreover, trolling research might also investigate the antecedents of perceived funniness [91] to better understand how and why trolls attempt their aggressive versions of comedy. This may also prove fruitful when investigating different kinds of trolling [20].

Finally, we suggest moving beyond correlational studies when investigating trolling specifically and antisocial social media behavior more generally. Correlations between personality facets and trolling behavior do not explain why people choose to troll others when they do. We suggest a thorough examination of (potential) trolling behavior under different situational circumstances to enhance our understanding of internet trolls. This could also help platform providers create more harmonious communities, for example by answering the question of whether certain functions (downvotes, likes, featured comments, etc.) encourage people to engage in trolling behavior.

5. Conclusion

The present study confirms the correlational associations between the Dark Tetrad personality traits and trolling, but shows that Machiavellianism and narcissism do not predict immediate trolling motivation when we control for participants’ psychopathy and sadism. Moreover, even psychopathy and sadism are significant predictors only for the higher quantiles of immediate trolling motivation in our main analysis. In our robustness analysis, only sadism predicted higher quantiles of immediate trolling motivation, and sadism was also notably more important than psychopathy in our robustness dominance analysis. As such, this study highlights that not all facets of the Dark Tetrad are equally predictive of trolling and that the relationship between the dark personality dimensions and trolling behavior is more nuanced than previously assumed. This also underscores the value of quantile regression in personality research.

For some higher quantiles of immediate trolling motivation, we found that a social exclusion experience reduced the motivation to troll others. This highlights the importance of more experimental studies to enhance our understanding of online trolls.

Supporting information

S1 Appendix.

https://doi.org/10.1371/journal.pone.0280271.s001

References

2. Statista. Share of teenage individuals who have been bullied or trolled online in the United Kingdom (UK) as of January 2016, by platform. 2016. https://www-statista-com.zu.idm.oclc.org/statistics/547974/experience-of-online-bullying-and-trolling-on-social-media-by-teens-in-the-uk/
3. Statista. Cyber bullying. 2020.
4. Kunst A. Opinions on internet trolling in the U.S. 2017. Statista; 2019. https://www-statista-com.zu.idm.oclc.org/statistics/380047/agree-disagree-internet-trolling/
9. Cheng J, Bernstein M, Danescu-Niculescu-Mizil C, Leskovec J. Anyone can become a troll: Causes of trolling behavior in online discussions. In: CSCW Conference on Computer-Supported Cooperative Work. ACM; 2017, p. 1217–1230. https://doi.org/10.1145/2998181.2998213
13. Kunst A. Internet trolling: malicious actions in online comments sections U.S. 2017. Statista; 2019. https://www-statista-com.zu.idm.oclc.org/statistics/380000/internet-trolling-malicious-intent-towards-strangers/
52. Barton H. The dark side of the Internet. In: An Introduction to Cyberpsychology. Routledge; 2016, p. 80–92.
75. Koenker R. Quantile regression 40 years on. 2017. https://doi.org/10.1920/wp.cem.2017.3617
82. Koenker R. Package ‘quantreg’. 2013.
83. Bustos Navarrete C. dominanceanalysis. 2020. https://www.rdocumentation.org/packages/dominanceanalysis/versions/2.0.0
86. Bernstein MJ. Research in social psychology: Consequences of short- and long-term social exclusion. In: Riva P, Eck J, editors. Social Exclusion: Psychological Approaches to Understanding and Reducing Its Impact. Springer International Publishing; 2016, p. 51–72. https://doi.org/10.1007/978-3-319-33033-4

The Future of Free Speech, Trolls, Anonymity and Fake News Online

Many experts fear uncivil and manipulative behaviors on the internet will persist – and may get worse. This will lead to a splintering of social media into AI-patrolled and regulated ‘safe spaces’ separated from free-for-all zones. Some worry this will hurt the open exchange of ideas and compromise privacy.

Table of contents

  • About this canvassing of experts
  • Theme 1: Things will stay bad, Part I
  • Theme 2: Things will stay bad, Part II
  • Theme 3: Things will get better
  • Theme 4: Oversight and community moderation come with a cost
  • A few closing general observations and predictions
  • Acknowledgments

Since the early 2000s, the wider diffusion of the network, the dawn of Web 2.0 and social media’s increasingly influential impacts, and the maturation of strategic uses of online platforms to influence the public for economic and political gain have altered discourse. In recent years, prominent internet analysts and the public at large have expressed increasing concerns that the content, tone and intent of online interactions have undergone an evolution that threatens its future and theirs. Events and discussions unfolding over the past year highlight the struggles ahead. Among them:

  • Respected internet pundit John Naughton asked in The Guardian, “Has the internet become a failed state?” and mostly answered in the affirmative.
  • The U.S. Senate heard testimony on the increasingly effective use of social media for the advancement of extremist causes, and there was growing attention to how social media are becoming weaponized by terrorists, creating newly effective kinds of propaganda.
  • Scholars provided evidence showing that social bots were implemented in acts aimed at disrupting the 2016 U.S. presidential election. And news organizations documented how foreign trolls bombarded U.S. social media with fake news. A December 2016 Pew Research Center study found that about two-in-three U.S. adults (64%) say fabricated news stories cause a great deal of confusion about the basic facts of current issues and events.
  • A May 2016 Pew Research Center report showed that 62% of Americans get their news from social media. Farhad Manjoo of The New York Times argued that the “internet is loosening our grip on the truth.” And his colleague Thomas B. Edsall curated a lengthy list of scholarly articles after the election that painted a picture of how the internet was jeopardizing democracy.
  • 2016 was the first year that an internet meme made its way into the Anti-Defamation League’s database of hate symbols.
  • Time magazine devoted a 2016 cover story to explaining “why we’re losing the internet to the culture of hate.”
  • Celebrity social media mobbing intensified. One example: “Ghostbusters” actor and Saturday Night Live cast member Leslie Jones was publicly harassed on Twitter and had her personal website hacked.
  • An industry report revealed how former Facebook workers suppressed conservative news content.
  • Multiple news stories indicated that state actors and governments increased their efforts to monitor users of instant messaging and social media.
  • The Center on the Future of War started the Weaponized Narrative Initiative.
  • Many experts documented the ways in which “fake news” and online harassment might be more than social media “byproducts” because they help to drive revenue.
  • #Pizzagate, a case study, revealed how disparate sets of rumors can combine to shape public discourse and, at times, potentially lead to dangerous behavior.
  • Scientific American carried a nine-author analysis of the influencing of discourse by artificial intelligence (AI) tools, noting, “We are being remotely controlled ever more successfully in this manner. … The trend goes from programming computers to programming people … a sort of digital scepter that allows one to govern the masses efficiently without having to involve citizens in democratic processes.”
  • Google (with its Perspective API), Twitter and Facebook are experimenting with new ways to filter out or label negative or misleading discourse.
  • Researchers are exploring why people troll.
  • And a drumbeat of stories out of Europe covered how governments are attempting to curb fake news and hate speech but struggling to reconcile their concerns with the sweeping free speech rules that apply in America.

To illuminate current attitudes about the potential impacts of online social interaction over the next decade, Pew Research Center and Elon University’s Imagining the Internet Center conducted a large-scale canvassing of technology experts, scholars, corporate practitioners and government leaders. Some 1,537 responded to this effort between July 1 and Aug. 12, 2016 (prior to the late-2016 revelations about potential manipulation of public opinion via hacking of social media). They were asked:

In the next decade, will public discourse online become more or less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust?

In response to this question, 42% of respondents indicated that they expect “no major change” in the online social climate in the coming decade, and 39% said they expect the online future will be “more shaped” by negative activities. Those who said they expect the internet to be “less shaped” by harassment, trolling and distrust were in the minority: some 19% said this. Respondents were asked to elaborate on how they anticipate online interaction progressing over the next decade. (See “About this canvassing of experts” for further details about the limits of this sample.)

Participants were also asked to explain their answers in a written elaboration, considering the following prompts: 1) How do you expect social media and digital commentary to evolve in the coming decade? 2) Do you think we will see a widespread demand for technological systems or solutions that encourage more inclusive online interactions? 3) What do you think will happen to free speech? And 4) What might be the consequences for anonymity and privacy?

While respondents expressed a range of opinions from deep concern to disappointment to resignation to optimism, most agreed that people – at their best and their worst – are empowered by networked communication technologies. Some said the flame wars and strategic manipulation of the zeitgeist might just be getting started if technological and human solutions are not put in place to bolster diverse civil discourse.

A number of respondents predicted online reputation systems and much better security and moderation solutions will become near ubiquitous in the future, making it increasingly difficult for “bad actors” to act out disruptively. Some expressed concerns that such systems – especially those that remove the ability to participate anonymously online – will result in an altered power dynamic between government/state-level actors, the elites and “regular” citizens.

Anonymity, a key affordance of the early internet, is an element that many in this canvassing attributed to enabling bad behavior and facilitating “uncivil discourse” in shared online spaces. The purging of user anonymity is seen as possibly leading to a more inclusive online environment and also setting the stage for governments and dominant institutions to even more freely employ surveillance tools to monitor citizens, suppress free speech and shape social debate.

Most experts predicted that the builders of open social spaces on global communications networks will find it difficult to support positive change in “cleaning up” the real-time exchange of information and sharing of diverse ideologies over the next decade, as millions more people around the world come online for the first time and many of the billions already connected compete in an arms race of sorts to hack and subvert corrective systems.

Those who believe the problems of trolling and other toxic behaviors can be solved say the cure might also be quite damaging. “One of the biggest challenges will be finding an appropriate balance between protecting anonymity and enforcing consequences for the abusive behavior that has been allowed to characterize online discussions for far too long,” explained expert respondent Bailey Poland, author of “Haters: Harassment, Abuse, and Violence Online.”

The majority in this canvassing were sympathetic to those abused or misled in the current online environment while expressing concerns that the most likely solutions will allow governments and big businesses to employ surveillance systems that monitor citizens, suppress free speech and shape discourse via algorithms, allowing those who write the algorithms to sculpt civil debate.

Susan Etlinger, an industry analyst at Altimeter Group, walked through a future scenario of tit-for-tat, action-reaction that ends in what she calls a “Potemkin internet.” She wrote: “In the next several years we will see an increase in the type and volume of bad behavior online, mostly because there will be a corresponding increase in digital activity. … Cyberattacks, doxing, and trolling will continue, while social platforms, security experts, ethicists, and others will wrangle over the best ways to balance security and privacy, freedom of speech, and user protections. A great deal of this will happen in public view. The more worrisome possibility is that privacy and safety advocates, in an effort to create a more safe and equal internet, will push bad actors into more-hidden channels such as Tor. Of course, this is already happening, just out of sight of most of us. The worst outcome is that we end up with a kind of Potemkin internet in which everything looks reasonably bright and sunny, which hides a more troubling and less transparent reality.”

One other point of context for this non-representative sample of a particular population: While the question we posed was not necessarily aimed at getting people’s views about the role of political material in online social spaces, it inevitably drew commentary along those lines because this survey was fielded in the midst of a bitter, intense election in the United States where one of the candidates, in particular, was a provocative user of Twitter.

Most participants in this canvassing wrote detailed elaborations explaining their positions. Their well-considered comments provide insights about hopeful and concerning trends. They were allowed to respond anonymously, and many chose to do so.

These findings do not represent all points of view possible, but they do reveal a wide range of striking observations. Respondents collectively articulated four “key themes” that are introduced and briefly explained below and then expanded upon in more-detailed sections.

The following section presents a brief overview of the most evident themes extracted from the written responses, including a small selection of representative quotes supporting each point. Some responses are lightly edited for style or due to length.

Theme 1: Things will stay bad because to troll is human; anonymity abets anti-social behavior; inequities drive at least some of the inflammatory dialogue; and the growing scale and complexity of internet discourse makes this difficult to defeat

While some respondents saw issues with uncivil behavior online as being on somewhat of a plateau at the time of this canvassing in the summer of 2016, and a few expect solutions will cut hate speech, misinformation and manipulation, the vast majority shared at least some concerns that things could get worse; thus, two of the four overarching themes of this report start with the phrase “Things will stay bad.”

A number of expert respondents observed that negative online discourse is just the latest example of the many ways humans have exercised social vitriol for millennia. Jerry Michalski , founder at REX, wrote, “I would very much love to believe that discourse will improve over the next decade, but I fear the forces making it worse haven’t played out at all yet. After all, it took us almost 70 years to mandate seatbelts. And we’re not uniformly wise about how to conduct dependable online conversations, never mind debates on difficult subjects. In that long arc of history that bends toward justice, particularly given our accelerated times, I do think we figure this out. But not within the decade.”

Vint Cerf, Internet Hall of Fame member, Google vice president and co-inventor of the Internet Protocol, summarized some of the harmful effects of disruptive discourse:

“The internet is threatened with fragmentation,” he wrote. “… People feel free to make unsupported claims, assertions, and accusations in online media. … As things now stand, people are attracted to forums that align with their thinking, leading to an echo effect. This self-reinforcement has some of the elements of mob (flash-crowd) behavior. Bad behavior is somehow condoned because ‘everyone’ is doing it. … It is hard to see where this phenomenon may be heading. … Social media bring every bad event to our attention, making us feel as if they all happened in our back yards – leading to an overall sense of unease. The combination of bias-reinforcing enclaves and global access to bad actions seems like a toxic mix. It is not clear whether there is a way to counter-balance their socially harmful effects.”

Subtheme: Trolls have been with us since the dawn of time; there will always be some incivility

An anonymous respondent commented, “The tone of discourse online is dictated by fundamental human psychology and will not easily be changed.” This statement reflects the attitude of expert internet technologists, researchers and pundits, most of whom agree that it is the people using the network, not the network itself, that are the root of the problem.

Paul Jones, clinical professor and director of ibiblio.org at the University of North Carolina, Chapel Hill, commented, “The id unbound from the monitoring and control by the superego is both the originator of communication and the nemesis of understanding and civility.”

John Cato, a senior software engineer, wrote, “Trolling for arguments has been an internet tradition since Usenet. Some services may be able to mitigate the problem slightly by forcing people to use their real identities, but wherever you have anonymity you will have people who are there just to make other people angry.”

And an anonymous software engineer explained why the usual level of human incivility has been magnified by the internet, noting, “The individual’s voice has a much higher perceived value than it has in the past. As a result, there are more people who will complain online in an attempt to get attention, sympathy, or retribution.”

Subtheme: Trolling and other destructive behaviors often result because people do not recognize or don’t care about the consequences flowing from their online actions

Michael Kleeman, formerly with the Boston Consulting Group, Arthur D. Little and Sprint, now senior fellow at the Institute on Global Conflict and Cooperation at the University of California, San Diego, explained: “Historically, communities of practice and conversation had other, often physical, linkages that created norms of behavior. And actors would normally be identified, not anonymous. Increased anonymity coupled with an increase in less-than-informed input, with no responsibility by the actors, has tended and will continue to create less open and honest conversations and more one-sided and negative activities.”

An expert respondent who chose not to be identified commented, “People are snarky and awful online in large part because they can be anonymous.” And another such respondent wrote, “Trolls now know that their methods are effective and carry only minimal chance of social stigma and essentially no other punishment. If Gamergate can harass and dox any woman with an opinion and experience no punishment as a result, how can things get better?”

Anonymously, a professor at Massachusetts Institute of Technology (MIT) commented, “We see a dark current of people who equate free speech with the right to say anything, even hate speech, even speech that does not sync with respected research findings. They find in unmediated technology a place where their opinions can have a multiplier effect, where they become the elites.”

Subtheme: Inequities drive at least some of the inflammatory dialogue

Some leading participants in this canvassing said the tone of discourse will worsen in the next decade due to inequities and prejudice, noting wealth disparity, the hollowing out of the middle class, and homophily (the tendency of people to bond with those similar to themselves and thus also at times to shun those seen as “the other”).

Cory Doctorow, writer, computer science activist-in-residence at MIT Media Lab and co-owner of Boing Boing, offered a bleak assessment, writing, “Thomas Piketty, etc., have correctly predicted that we are in an era of greater social instability created by greater wealth disparity which can only be solved through either the wealthy collectively opting for a redistributive solution (which feels unlikely) or everyone else compelling redistribution (which feels messy, unstable, and potentially violent). The internet is the natural battleground for whatever breaking point we reach to play out, and it’s also a useful surveillance, control, and propaganda tool for monied people hoping to forestall a redistributive future. The Chinese internet playbook – the 50c army, masses of astroturfers, libel campaigns against ‘enemies of the state,’ paranoid war-on-terror rhetoric – has become the playbook of all states, to some extent (see, e.g., the HB Gary leak that revealed the U.S. Air Force was putting out procurement tenders for ‘persona management’ software that allowed their operatives to control up to 20 distinct online identities, each). That will create even more inflammatory dialogue, flamewars, polarized debates, etc.”

And an anonymous professor at MIT remarked, “Traditional elites have lost their credibility because they have become associated with income inequality and social injustice. … This dynamic has to shift before online life can play a livelier part in the life of the polity. I believe that it will, but slowly.”

Axel Bruns, a professor at the Queensland University of Technology’s Digital Media Research Centre, said, “Unfortunately, I see the present prevalence of trolling as an expression of a broader societal trend across many developed nations, towards belligerent factionalism in public debate, with particular attacks directed at women as well as ethnic, religious, and sexual minorities.”

Subtheme: The ever-expanding scale of internet discourse and its accelerating complexity make it difficult to deal with problematic content and contributors

As billions more people are connected online and technologies such as AI chatbots, the Internet of Things, and virtual and augmented reality continue to mature, complexity is always on the rise. Some respondents said well-intentioned attempts to raise the level of discourse are less likely to succeed in a rapidly changing and widening information environment.

Matt Hamblen, senior editor at Computerworld, commented, “[By 2026] social media and other forms of discourse will include all kinds of actors who had no voice in the past; these include terrorists, critics of all kinds of products and art forms, amateur political pundits, and more.”

An anonymous respondent wrote, “Bad actors will have means to do more, and more significant bad actors will be automated as bots are funded in extra-statial ways to do more damage – because people are profiting from this.”

Jessica Vitak, an assistant professor at the University of Maryland, commented, “Social media’s affordances, including increased visibility and persistence of content, amplify the volume of negative commentary. As more people get internet access – and especially smartphones, which allow people to connect 24/7 – there will be increased opportunities for bad behavior.”

Bryan Alexander, president of Bryan Alexander Consulting, added, “The number of venues will rise with the expansion of the Internet of Things and when consumer-production tools become available for virtual and mixed reality.”

Theme 2: Things will stay bad because tangible and intangible economic and political incentives support trolling. Participation = power and profits

Many respondents said power dynamics push trolling along. The business model of social media platforms is driven by advertising revenues generated by engaged platform users. The more raucous and incendiary the material, at times, the more income a site generates. The more contentious a political conflict is, the more likely it is to be an attention getter. Online forums lend themselves to ever-more hostile arguments.

Subtheme: ‘Hate, anxiety, and anger drive participation,’ which equals profits and power, so online social platforms and mainstream media support and even promote uncivil acts

Frank Pasquale, professor of law at the University of Maryland and author of “Black Box Society,” commented, “The major internet platforms are driven by a profit motive. Very often, hate, anxiety and anger drive participation with the platform. Whatever behavior increases ad revenue will not only be permitted, but encouraged, excepting of course some egregious cases.”

Kate Crawford, a well-known internet researcher studying how people engage with networked technologies, observed, “Distrust and trolling is happening at the highest levels of political debate, and the lowest. The Overton Window has been widened considerably by the 2016 U.S. presidential campaign, and not in a good way. We have heard presidential candidates speak of banning Muslims from entering the country, asking foreign powers to hack former White House officials, retweeting neo-Nazis. Trolling is a mainstream form of political discourse.”

Andrew Nachison, founder at We Media, said, “It’s a brawl, a forum for rage and outrage. It’s also dominated by social media platforms on the one hand and content producers on the other that collude and optimize for quantity over quality. Facebook adjusts its algorithm to provide a kind of quality – relevance for individuals. But that’s really a ruse to optimize for quantity. The more we come back, the more money they make off of ads and data about us. So the shouting match goes on. I don’t know that prevalence of harassment and ‘bad actors’ will change – it’s already bad – but if the overall tone is lousy, if the culture tilts negative, if political leaders popularize hate, then there’s good reason to think all of that will dominate the digital debate as well.”

Subtheme: Technology companies have little incentive to rein in uncivil discourse, and traditional news organizations – which used to shape discussions – have shrunk in importance

Several of the expert respondents said because algorithmic solutions tend “to reward that which keeps us agitated,” it is especially damaging that the pre-internet news organizations that once employed fairly objective and well-trained (if not well-paid) armies of arbiters as democratic shapers of the defining climate of social and political discourse have fallen out of favor, replaced by creators of clickbait headlines read and shared by short-attention-span social sharers.

David Clark, a senior research scientist at MIT and Internet Hall of Famer, commented that he worries over the loss of character in the internet community. “It is possible, with attention to the details of design that lead to good social behavior, to produce applications that better regulate negative behavior,” he wrote. “However, it is not clear what actor has the motivation to design and introduce such tools. The application space on the internet today is shaped by large commercial actors, and their goals are profit-seeking, not the creation of a better commons. I do not see tools for public discourse being good ‘money makers,’ so we are coming to a fork in the road – either a new class of actor emerges with a different set of motivations, one that is prepared to build and sustain a new generation of tools, or I fear the overall character of discourse will decline.”

An anonymous principal security consultant wrote, “As long as success – and in the current climate, profit as a common proxy for success – is determined by metrics that can be easily improved by throwing users under the bus, places that run public areas online will continue to do just that.”

Steven Waldman, founder and CEO of LifePosts, said, “It certainly sounds noble to say the internet has democratized public opinion. But it’s now clear: It has given voice to those who had been voiceless because they were oppressed minorities and to those who were voiceless because they are crackpots. … It may not necessarily be ‘bad actors’ – i.e., racists, misogynists, etc. – who win the day, but I do fear it will be the more strident. I suspect there will be ventures geared toward counter-programming against this, since many people are uncomfortable with it. But venture-backed tech companies have a huge bias toward algorithmic solutions that have tended to reward that which keeps us agitated. Very few media companies now have staff dedicated to guiding conversations online.”

John Anderson, director of journalism and media studies at Brooklyn College, wrote, “The continuing diminution of what Cass Sunstein once called ‘general-interest intermediaries’ such as newspapers, network television, etc. means we have reached a point in our society where wildly different versions of ‘reality’ can be chosen and customized by people to fit their existing ideological and other biases. In such an environment there is little hope for collaborative dialogue and consensus.”

David Durant, a business analyst at U.K. Government Digital Service, argued, “It is in the interest of the paid-for media and most political groups to continue to encourage ‘echo-chamber’ thinking and to consider pragmatism and compromise as things to be discouraged. While this trend continues, the ability for serious civilized conversations about many topics will remain very hard to achieve.”

Subtheme: Terrorists and other political actors are benefiting from the weaponization of online narratives by implementing human- and bot-based misinformation and persuasion tactics

The weaponization of social media and “capture” of online belief systems, also known as “narratives,” emerged from obscurity in 2016 due to the perceived impact of social media uses by terror organizations and political factions. Accusations of Russian influence via social media on the U.S. presidential election brought to public view the ways in which strategists of all stripes are endeavoring to influence people through the sharing of often false or misleading stories, photos and videos. “Fake news” moved to the forefront of ongoing discussions about the displacement of traditional media by social platforms. Earlier, in the summer of 2016, participants in this canvassing submitted concerns about misinformation in online discourse creating distorted views.

Anonymously, a futurist, writer, and author at Wired explained, “New levels of ‘cyberspace sovereignty’ and heavy-duty state and non-state actors are involved; there’s money, power, and geopolitical stability at stake now; it’s not a mere matter of personal grumpiness from trolls.”

Karen Blackmore, a lecturer in IT at the University of Newcastle, wrote, “Misinformation and anti-social networking are degrading our ability to debate and engage in online discourse. When opinions based on misinformation are given the same weight as those of experts and propelled to create online activity, we tread a dangerous path. Online social behaviour, without community-imposed guidelines, is subject to many potentially negative forces. In particular, social online communities such as Facebook also function as marketing tools, where sensationalism is widely employed and community members who view this dialogue as their news source gain a very distorted view of current events and community views on issues. This is exacerbated with social network and search engine algorithms effectively sorting what people see to reinforce worldviews.”

Laurent Schüpbach, a neuropsychologist at University Hospital in Zurich, focused his entire response on how burgeoning acts of economic and political manipulation drive the negative tone online, writing, “The reason it will probably get worse is that companies and governments are starting to realise that they can influence people’s opinions that way. And these entities sure know how to circumvent any protection in place. Russian troll armies are a good example of something that will become more and more common in the future.”

David Wuertele, a software engineer at Tesla Motors, commented, “Unfortunately, most people are easily manipulated by fear. … Negative activities on the internet will exploit those fears, and disproportionate responses will also attempt to exploit those fears. Soon, everyone will have to take off their shoes and endure a cavity search before boarding the internet.”

Theme 3: Things will get better because technical and human solutions will arise as the online world splinters into segmented, controlled social zones with the help of artificial intelligence (AI)

Most respondents said it is likely that the coming decade will see a widespread move to more-secure services, applications, and platforms and more robust user-identification policies. Some said people born into the social media age will adapt. Some predict that more online systems will require clear identification of participants. This means that the online social forums could splinter into various formats, some of which are highly protected and monitored and others which could retain the free-for-all character of today’s platforms.

Subtheme: AI sentiment analysis and other tools will detect inappropriate behavior and many trolls will be caught in the filter; human oversight by moderators might catch others

Some experts in this canvassing say progress is already being made on some fronts toward better technological and human solutions.

Galen Hunt, a research manager at Microsoft Research NExT, replied, “As language-processing technology develops, technology will help us identify and remove bad actors, harassment, and trolls from accredited public discourse.”

Stowe Boyd, chief researcher at Gigaom, observed, “I anticipate that AIs will be developed that will rapidly decrease the impact of trolls. Free speech will remain possible, although AI filtering will make a major dent on how views are expressed, and hate speech will be blocked.”

Marina Gorbis, executive director at the Institute for the Future, added, “I expect we will develop more social bots and algorithmic filters that would weed out some of the trolls and hateful speech. I expect we will create bots that would promote beneficial connections and potentially insert context-specific data/facts/stories that would benefit more positive discourse. Of course, any filters and algorithms will create issues around what is being filtered out and what values are embedded in algorithms.”

Jean Russell of Thrivable Futures wrote, “First, conversations can have better containers that filter for real people who consistently act with decency. Second, software is getting better and more nuanced in sentiment analysis, making it easier for software to augment our filtering out of trolls. Third, we are at peak identity crisis and a new wave of people want to cross the gap in dialogue to connect with others before the consequences of being tribal get worse (Brexit, Trump, etc.).”

David Karger, a professor of computer science at MIT, said, “My own research group is exploring several novel directions in digital commentary. In the not too distant future all this work will yield results. Trolling, doxxing, echo chambers, click-bait, and other problems can be solved. We will be able to ascribe sources and track provenance in order to increase the accuracy and trustworthiness of information online. We will create tools that increase people’s awareness of opinions differing from their own and support conversations with and learning from people who hold those opinions. … The future Web will give people much better ways to control the information that they receive, which will ultimately make problems like trolling manageable (trolls will be able to say what they want, but few will be listening).”
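To make the kind of automated filtering these respondents anticipate more concrete, here is a deliberately minimal sketch in Python. A keyword-based hostility score stands in for the trained sentiment or toxicity classifier a real platform would use; the term list, scoring rule, and threshold are invented placeholders, not any vendor’s actual system:

```python
# Toy comment-moderation pass: a stand-in for an AI toxicity classifier.
# All terms and thresholds below are illustrative placeholders.
HOSTILE_TERMS = {"idiot", "loser", "stupid", "pathetic"}
THRESHOLD = 0.5  # flag comments whose score exceeds this value

def hostility_score(comment):
    """Return the fraction of words that match the hostile-term list."""
    words = [w.strip(".,!?").lower() for w in comment.split()]
    if not words:
        return 0.0
    return sum(w in HOSTILE_TERMS for w in words) / len(words)

def moderate(comments):
    """Label each comment as 'flagged' or 'published'."""
    return [
        (c, "flagged" if hostility_score(c) > THRESHOLD else "published")
        for c in comments
    ]

if __name__ == "__main__":
    feed = ["Great article, thanks!", "You idiot. Pathetic loser."]
    for comment, decision in moderate(feed):
        print(f"{decision:>9}: {comment}")
```

A production system would replace the scorer with a learned model and would typically route flagged comments to human moderators rather than deleting them outright, which is where the oversight costs discussed under Theme 4 come in.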

Subtheme: There will be partitioning, exclusion and division of online outlets, social platforms and open spaces

Facebook, Twitter, Instagram, Google and other platform providers already “shape” and thus limit what the public views via the implementation of algorithms. As people become disenchanted with the uncivil discourse on “open” platforms, they stop using them or close their accounts, sometimes moving to smaller online communities of people with similar needs or ideologies. Some experts expect these trends to continue, with even more partitions, divisions and exclusions emerging as measures are taken to clean things up. For instance, it is expected that the capabilities of AI-based bots dispatched to assist with information sorting, security, and regulation of the tone and content of discourse will continue to be refined.

Lindsay Kenzig, a senior design researcher, said, “Technology will mediate who and what we see online more and more, so that we are drawn more toward communities with similar interests than those who are dissimilar. There will still be some places where you can find those with whom to argue, but they will be more concentrated into only a few locations than they are now.”

Valerie Bock of VCB Consulting commented, “Spaces where people must post under their real names and where they interact with people with whom they have multiple bonds regularly have a higher quality of discourse. … In response to this reality, we’ll see some consolidation as it becomes easier to shape commercial interactive spaces to the desired audience. There will be free-for-all spaces and more-tightly-moderated walled gardens, depending on the sponsor’s strategic goals. There will also be private spaces maintained by individuals and groups for specific purposes.”

Lisa Heinz, a doctoral student at Ohio University, commented, “Humanity’s reaction to negative forces will likely contribute more to the ever-narrowing filter bubble, which will continue to create an online environment that lacks inclusivity by its exclusion of opposing viewpoints. An increased demand for systemic internet-based AI will create bots that will begin to interact – as proxies for the humans that train them – with humans online in real-time and with what would be recognized as conversational language, not the word-parroting bot behavior we see on Twitter now. … When this happens, we will see bots become part of the filter bubble phenomenon as a sort of mental bodyguard that prevents an intrusion of people and conversations in which individuals want no part. The unfortunate aspect of this iteration of the filter bubble means that while free speech itself will not be affected, people will project their voices into the chasm, but few will hear them.”

Bob Frankston, internet pioneer and software innovator, wrote, “I see negative activities having an effect but the effect will likely be from communities that shield themselves from the larger world. We’re still working out how to form and scale communities.”

The expert comments in response to this canvassing were recorded in the summer of 2016; by early 2017, after many events (Brexit, the U.S. election, others mentioned earlier in this report) surfaced concerns about civil discourse, misinformation and impacts on democracy, an acceleration of activity tied to solutions emerged. Facebook, Twitter and Google announced some new efforts toward technological approaches; many conversations about creating new methods of support for public affairs journalism began to be undertaken; and consumer bubble-busting tools including “Outside Your Bubble” and “Escape Your Bubble” were introduced.

Subtheme: Trolls and other actors will fight back, innovating around any barriers they face

Some participants in this canvassing said they expect the existing, continuous arms-race dynamic to expand, as some people create and apply new measures to ride herd on online discourse while others constantly endeavor to thwart them.

Cathy Davidson, founding director of the Futures Initiative at the Graduate Center of the City University of New York, said, “We’re in a spy vs. spy internet world where the faster that hackers and trolls attack, the faster companies (Mozilla, thank you!) plus for-profits come up with ways to protect against them and then the hackers develop new strategies against those protections, and so it goes. I don’t see that ending. … I would not be surprised at more publicity in the future, as a form of cyber-terror. That’s different from trolls, more geo-politically orchestrated to force a national or multinational response. That is terrifying if we do not have sound, smart, calm leadership.”

Sam Anderson, coordinator of instructional design at the University of Massachusetts, Amherst, said, “It will be an arms race between companies and communities that begin to realize (as some online games companies like Riot have) that toxic online communities will lower their long-term viability and potential for growth. This will war with incentives for short-term gains that can arise out of bursts of angry or sectarian activity (Twitter’s character limit inhibits nuance, which increases reaction and response).”

Theme 4: Oversight and community moderation come with a cost. Some solutions could further change the nature of the internet because surveillance will rise; the state may regulate debate; and these changes will polarize people and limit access to information and free speech

A share of respondents said greater regulation of speech and technological solutions to curb harassment and trolling will result in more surveillance, censorship and cloistered communities. They worry this will change people’s sharing behaviors online, limit exposure to diverse ideas and challenge freedom.

Subtheme: Surveillance will become even more prevalent

While several respondents indicated that there is no longer a chance of anonymity online, many say privacy and choice are still options, and they should be protected.

Longtime internet civil libertarian Richard Stallman, Internet Hall of Fame member and president of the Free Software Foundation, spoke to this fear. He predicted, “Surveillance and censorship will become more systematic, even in supposedly free countries such as the U.S. Terrorism and harassment by trolls will be presented as the excuses, but the effect will be dangerous for democracy.”

Rebecca MacKinnon, director of Ranking Digital Rights at New America, wrote, “I’m very concerned about the future of free speech given current trends. The demands for governments and companies to censor and monitor internet users are coming from an increasingly diverse set of actors with very legitimate concerns about safety and security, as well as concerns about whether civil discourse is becoming so poisoned as to make rational governance based on actual facts impossible. I’m increasingly inclined to think that the solutions, if they ever come about, will be human/social/political/cultural and not technical.”

James Kalin of Virtually Green wrote, “Surveillance capitalism is increasingly grabbing and mining data on everything that anyone says, does, or buys online. The growing use of machine learning processing of the data will drive ever more subtle and pervasive manipulation of our purchasing, politics, cultural attributes, and general behavior. On top of this, the data is being stolen routinely by bad actors who will also be using machine learning processing to steal or destroy things we value as individuals: our identities, privacy, money, reputations, property, elections, you name it. I see a backlash brewing, with people abandoning public forums and social network sites in favor of intensely private ‘black’ forums and networks.”

Subtheme: Dealing with hostile behavior and addressing violence and hate speech will become the responsibility of the state instead of the platform or service providers

A number of respondents said they expect governments or other authorities will begin implementing regulation or other reforms to address these issues, most indicating that the competitive instincts of platform providers do not work in favor of the implementation of appropriate remedies without some incentive.

Michael Rogers, author and futurist at Practical Futurist, predicted governments will assume control over identifying internet users. He observed, “I expect there will be a move toward firm identities – even legal identities issued by nations – for most users of the Web. There will as a result be public discussion forums in which it is impossible to be anonymous. There would still be anonymity available, just as there is in the real world today. But there would be online activities in which anonymity was not permitted. Clearly this could have negative free-speech impacts in totalitarian countries but, again, there would still be alternatives for anonymity.”

Paula Hooper Mayhew, a professor of humanities at Fairleigh Dickinson University, commented, “My fear is that because of the virtually unlimited opportunities for negative use of social media globally we will experience a rising worldwide demand for restrictive regulation. This response may work against support of free speech in the U.S.”

Marc Rotenberg, executive director of the Electronic Privacy Information Center (EPIC), wrote, “The regulation of online communications is a natural response to the identification of real problems, the maturing of the industry, and the increasing expertise of government regulators.”

Subtheme: Polarization will occur due to the compartmentalization of ideologies

John Markoff, senior writer at The New York Times, commented, “There is growing evidence that the Net is a polarizing force in the world. I don’t believe we completely understand the dynamic, but my surmise is that it is actually building more walls than it is tearing down.”

Marcus Foth, a professor at Queensland University of Technology, said, “Public discourse online will become less shaped by bad actors … because the majority of interactions will take place inside walled gardens. … Social media platforms hosted by corporations such as Facebook and Twitter use algorithms to filter, select, and curate content. With less anonymity and less diversity, the two biggest problems of the Web 1.0 era have been solved from a commercial perspective: fewer trolls who can hide behind anonymity. Yet, what are we losing in the process? Algorithmic culture creates filter bubbles, which risk an opinion polarisation inside echo chambers.”

Emily Shaw, a U.S. civic technologies researcher for mySociety, predicted, “Since social networks … are the most likely future direction for public discourse, a million (self)-walled gardens are more likely to be the outcome than is an increase in hostility, because that’s what’s more commercially profitable.”

Subtheme: Increased monitoring, regulation and enforcement will shape content to such an extent that the public will not gain access to important information and may even lose free speech

Experts predict increased oversight and surveillance, left unchecked, could lead to dominant institutions and actors using their power to suppress alternative news sources, censor ideas, track individuals, and selectively block network access. This, in turn, could mean publics might never know what they are missing out on, since information will be filtered, removed, or concealed.


Thorlaug Agustsdottir of Iceland’s Pirate Party said, “Monitoring is and will be a massive problem, with increased government control and abuse. The fairness and freedom of the internet’s early days are gone. Now it’s run by big data, Big Brother, and big profits. Anonymity is a myth; it only exists for end-users who lack lookup resources.”

Joe McNamee, executive director at European Digital Rights, said, “In the context of a political environment where deregulation has reached the status of ideology, it is easy for governments to demand that social media companies do ‘more’ to regulate everything that happens online. We see this with the European Union’s ‘code of conduct’ with social media companies. This privatisation of regulation of free speech (in a context of huge, disproportionate, asymmetrical power due to the data stored and the financial reserves of such companies) raises existential questions for the functioning of healthy democracies.”

Randy Bush, Internet Hall of Fame member and research fellow at Internet Initiative Japan, wrote, “Between troll attacks, chilling effects of government surveillance and censorship, etc., the internet is becoming narrower every day.”

Dan York, senior content strategist at the Internet Society, wrote, “Unfortunately, we are in for a period where the negative activities may outshine the positive activities until new social norms can develop that push back against the negativity. It is far too easy right now for anyone to launch a large-scale public negative attack on someone through social media and other channels – and often to do so anonymously (or hiding behind bogus names). This then can be picked up by others and spread. The ‘mob mentality’ can be easily fed, and there is little fact-checking or source-checking these days before people spread information and links through social media. I think this will cause some governments to want to step in to protect citizens and thereby potentially endanger both free speech and privacy.”

Responses from other key experts regarding the future of online social climate

This section features responses by several more of the many top analysts who participated in this canvassing. Following this wide-ranging set of comments on the topic will be a much more expansive set of quotations directly tied to the set of four themes.

‘We’ll see more bad before good because the governing culture is weak and will remain so’

Baratunde Thurston, a director’s fellow at MIT Media Lab, Fast Company columnist, and former digital director of The Onion, replied, “To quote everyone ever, things will get worse before they get better. We’ve built a system in which access and connectivity are easy, the cost of publishing is near zero, and accountability and consequences for bad action are difficult to impose or toothless when they do. Plus consider that more people are getting online every day with no norm-setting for their behavior, and the systems that prevail now reward attention grabbing and extended time online. They reward emotional investment whether positive or negative. They reward conflict. So we’ll see more bad before more good because the governing culture is weak and will remain so while the financial models backing these platforms remain largely ad-based and rapid/scaled user growth-centric.”

‘We should reach ‘peak troll’ before long but there are concerns for free speech’

Brad Templeton, one of the early luminaries of Usenet and longtime Electronic Frontier Foundation board member, currently chair for computing at Singularity University, commented, “Now that everybody knows about this problem I expect active technological efforts to reduce the efforts of the trolls, and we should reach ‘peak troll’ before long. There are concerns for free speech. My hope is that pseudonymous reputation systems might protect privacy while doing this.”

‘People will find it tougher to avoid accountability’

Esther Dyson, founder of EDventure Holdings and technology entrepreneur, writer, and influencer, wrote: “Things will get somewhat better because people will find it tougher to avoid accountability. Reputations will follow you more than they do now. … There will also be clever services like CivilComments.com (disclosure: I’m an investor) that foster crowdsourced moderation rather than censorship of comments. That approach, whether by CivilComments or future competitors, will help. (So would sender-pays, recipient-charges email, a business I would *like* to invest in!) Nonetheless, anonymity is an important right – and freedom of speech with impunity (except for actual harm, yada yada) – is similarly important. Anonymity should be discouraged in general, but it is necessary in regimes or cultures or simply situations where the truth is avoided and truth-speakers are punished.”

Chatbots can help, but we need to make sure they don’t encode hate

Amy Webb, futurist and CEO at the Future Today Institute, said, “Right now, many technology-focused companies are working on ‘conversational computing,’ and the goal is to create a seamless interface between humans and machines. If you have [a] young child, she can be expected to talk to – rather than type on – machines for the rest of her life. In the coming decade, you will have more and more conversations with operating systems, and especially with chatbots, which are programmed to listen to, learn from and react to us. You will encounter bots first throughout social media, and during the next decade, they will become pervasive digital assistants helping you on many of the systems you use. Currently, there is no case law governing the free speech of a chatbot. During the 2016 election cycle, there were numerous examples of bots being used for political purposes. For example, there were thousands of bots created to mimic Latino/Latina voters supporting Donald Trump. If someone tweeted a disparaging remark about Trump and Latinos, bots that looked and sounded like members of the Latino community would target that person with tweets supporting Trump. Right now, many of the chatbots we interact with on social media and various websites aren’t so smart. But with improvements in artificial intelligence and machine learning, that will change. Without a dramatic change in how training databases are built and how our bots are programmed, we will realize a decade from now that we inadvertently encoded structural racism, homophobia, sexism and xenophobia into the bots helping to power our everyday lives. When chatbots start running amok – targeting individuals with hate speech – how will we define ‘speech’? At the moment, our legal system isn’t planning for a future in which we must consider the free speech infringements of bots.”

A trend toward decentralization and distributed problem solving will improve things

Doc Searls, journalist, speaker, and director of Project VRM at Harvard University’s Berkman Center for Internet and Society, wrote: “Harassment, trolling … these things thrive with distance, which favors the reptile brains in us all, making bad acting more possible and common. … Let’s face it, objectifying, vilifying, fearing, and fighting The Other has always been a problem for our species. … The internet we share today was only born on 30 April 1995, when the last backbone that forbade commercial activity stood down. Since then we have barely begun to understand, much less civilize, this new place without space. … I believe we are at the far end of this swing toward centralization on the Net. As individuals and distributed solutions to problems (e.g., blockchain [a digital ledger in which transactions are recorded chronologically and publicly]) gain more power and usage, we will see many more distributed solutions to fundamental social and business issues, such as how we treat each other.”

There are designs and tech advances ‘that would help tremendously’

Judith Donath of Harvard University’s Berkman Center, author of “The Social Machine: Designs for Living Online,” wrote, “With the current practices and interfaces, yes, trolls and bots will dominate online public discourse. But that need not be the case: there are designs and technological advances that would help tremendously. We need systems that support pseudonymity: locally persistent identities. Persistence provides accountability: people are responsible for their words. Locality protects privacy: people can participate in discussions without concern that their government, employer, insurance company, marketers, etc., are listening in (so if they are, they cannot connect the pseudonymous discourse to the actual person). We should have digital portraits that succinctly depict a (possibly pseudonymous) person’s history of interactions and reputation within a community. We need to be able to quickly see who is new, who is well-regarded, what role a person has played in past discussions. A few places do so now (e.g., StackExchange) but their basic charts are far from the goal: intuitive and expressive portrayals. ‘Bad actors’ and trolls (and spammers, harassers, etc.) have no place in most discussions – the tools we need for them are filters; we need to develop better algorithms for detecting destructive actions as defined by the local community. Beyond that, the more socially complex question is how to facilitate constructive discussions among people who disagree. Here, we need to rethink the structure of online discourse. The role of discussion host/moderator is poorly supported by current tech – and many discussions would proceed much better in a model other than the current linear free-for-all. Our face-to-face interactions have amazing subtlety – we can encourage or dissuade with slight changes in gaze, facial expression, etc. We need to create tools for conversation hosts (think of your role when you post something on your own Facebook page that sparks controversy) that help them to gracefully steer conversations.”
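Donath’s idea of pseudonymity with local persistence is concrete enough to sketch in code. The toy Python below is a minimal illustration under assumptions of my own: the class names, the random token standing in for an unlinkable identity, and the Counter-based “portrait” are invented for this example, not anything Donath or any platform specifies.

```python
import secrets
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Pseudonym:
    """A locally persistent identity: stable within one community,
    unlinkable across communities (Donath's 'locality')."""
    handle: str
    token: str  # random; deliberately carries no tie to a real-world identity
    history: Counter = field(default_factory=Counter)

    def portrait(self) -> str:
        """A crude 'digital portrait': summarize this handle's track record."""
        total = sum(self.history.values())
        return f"{self.handle}: {total} interactions {dict(self.history)}"

class Community:
    """Each community issues and stores its own pseudonyms."""
    def __init__(self, name: str):
        self.name = name
        self.members: dict[str, Pseudonym] = {}

    def join(self, handle: str) -> Pseudonym:
        # Persistence: a returning handle gets its existing record back.
        if handle not in self.members:
            self.members[handle] = Pseudonym(handle, secrets.token_hex(8))
        return self.members[handle]

# Usage: reputation accrues locally; a second community would see none of it.
forum = Community("gardening")
alice = forum.join("greenthumb42")
alice.history.update(["post", "post", "helpful_answer"])
print(alice.portrait())
```

The key property is that reputation attaches to the handle inside one community only; nothing in the record links it to a real person or to the same user’s handle elsewhere.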

‘Reward systems favor outrage mongering and attention seeking almost exclusively’

Seth Finkelstein, writer and pioneering computer programmer, believes the worst is yet to come: “One of the less-examined aspects of the 2016 U.S. presidential election is that Donald Trump is demonstrating to other politicians how to effectively exploit such an environment. He wasn’t the first to do it, by far. But he’s showing how very high-profile, powerful people can adapt and apply such strategies to social media. Basically, we’re moving out of the ‘early adopter’ phase of online polarization, into making it mainstream. The phrasing of this question conflates two different issues. It uses a framework akin to ‘Will our kingdom be more or less threatened by brigands, theft, monsters, and an overall atmosphere of discontent, strife, and misery?’ The first part leads one to think of malicious motives and thus to attribute the problems of the second part along the lines of outside agitators afflicting peaceful townsfolk. Of course deliberate troublemakers exist. Yet many of the worst excesses come from people who believe in their own minds that they are not bad actors at all, but are fighting a good fight for all which is right and true (indeed, in many cases, both sides of a conflict can believe this, and where you stand depends on where you sit). When reward systems favor outrage mongering and attention seeking almost exclusively, nothing is going to be solved by inveighing against supposed moral degenerates.”

Some bad behavior is ‘pent-up’ speech from those who have been voiceless

Jeff Jarvis, a professor at the City University of New York Graduate School of Journalism, wrote, “I am an optimist with faith in humanity. We will see whether my optimism is misplaced. I believe we are seeing the release of a pressure valve (or perhaps an explosion) of pent-up speech: the ‘masses’ who for so long could not be heard can now speak, revealing their own interests, needs, and frustrations – their own identities distinct from the false media concept of the mass. Yes, it’s starting out ugly. But I hope that we will develop norms around civilized discourse. Oh, yes, there will always be … trolls. What we need is an expectation that it is destructive to civil discourse to encourage them. Yes, it might have seemed fun to watch the show of angry fights. It might seem fun to media to watch institutions like the Republican Party implode. But it soon becomes evident that this is no fun. A desire and demand for civil, intelligent, useful discourse will return; no society or market can live on misinformation and emotion alone. Or that is my hope. How long will this take? It could be years. It could be a generation. It could be, God help us, never.”

Was the idea of ‘reasoned discourse’ ever reasonable?

Mike Roberts, Internet Hall of Fame member and first president and CEO of ICANN, observed, “Most attempts at reasoned discourse on topics interesting to me have been disrupted by trolls in the last decade or so. Many individuals faced with this harassment simply withdraw. … There is a somewhat broader question of whether expectations of ‘reasoned’ discourse were ever realistic. History of this, going back to Plato, is one of self-selection into congenial groups. The internet, among other things, has energized a variety of anti-social behaviors by people who get satisfaction from the attendant publicity. My wife’s reaction is ‘why are you surprised?’ in regard to seeing behavior online that already exists offline.”

Our disembodied online identity compels us to ‘ramp up the emotional content’

Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp., wrote,

“In the next decade a number of factors in public discourse online will continue to converge and vigorously affect each other:

1) Nowness is the ultimate arbiter: The value of our discourse (everything we see or hear) will be weighted by how immediate or instantly seen and communicated the information is. Real-time search, geolocation, just-in-time updates, Twitter, etc., are making of now, the present moment, an all-subsuming reality that tends to bypass anything that isn’t hyper-current.

2) Faceless selfism rocks: With photos and video, we can present ourselves dimensionally, but due to the lack of ‘facework’ in the online sim, our faces are absent or frozen in a framed portrait found elsewhere, and so there is no face-to-face, no dynamic interactivity, no responsive reading to our commentary, except in a follow-up comment. Still, we will get better at using public discourse as self-promotion.

3) Anonymity changes us: Identity-shielding leads to a different set of ‘manners’ or mannerisms that stem from our sense (not accurate, of course) that online we are anonymous.

4) Context AWOL: Our present ‘filter failure,’ to borrow Clay Shirky’s phrase, is an almost complete lack of context, reality check, or perspective. In the next decade we will start building better contextual frameworks for information.

5) Volume formula: The volume of content, from all quarters – anyone with a keypad, a device – makes it difficult to manage responses, or even to filter for relevance but tends to favor emotional button-pushing in order to be noticed.

6) Ersatz us: Online identities will be more made-up, more fictional, but also more malleable than typical ‘facework’ or other human interactions. We can pretend, for a while, to be an ersatz version of ourselves.

7) Any retort in a (tweet) storm: Again, given the lack of ‘facework’ or immediate facial response that defined human response for millennia, we will ramp up the emotional content of messaging to ensure some kind of response, frequently rewarding the brash and outrageous over the slow and thoughtful.”

We will get better at articulating and enforcing helpful norms

David Weinberger, senior researcher at Harvard University’s Berkman Klein Center for Internet & Society, said, “Conversations are always shaped by norms and what the environment enables. For example, seating 100 dinner guests at one long table will shape the conversations differently than putting them at ten tables of ten, or 25 tables of four. The acoustics of the room will shape the conversations. Assigning seats or not will shape the conversations. Even serving wine instead of beer may shape the conversations. The same considerations are even more important on the Net because its global nature means that we have fewer shared norms, and its digital nature means that we have far more room to play with ways of bringing people together. We’re getting much better at nudging conversations into useful interchanges. I believe we will continue to get better at it.”

Anonymity is on its way out, and that will discourage trolling

Patrick Tucker, author of “The Naked Future” and technology editor at Defense One, said, “Today’s negative online user environment is supported and furthered by two trends that are unlikely to last into the next decade: anonymity in posting and validation from self-identified subgroups. Increasingly, marketers’ need to better identify users and authentication APIs (authentication through Facebook, for example) are challenging online anonymity. The passing of anonymity will also shift the cost-benefit analysis of writing or posting something to appeal to only a self-identified bully group rather than a broad spectrum of people.”

Polarization breeds incivility and that is reflected in the incivility of online discourse

Alice Marwick, a fellow at Data & Society, commented, “Currently, online discourse is becoming more polarized and thus more extreme, mirroring the overall separation of people with differing viewpoints in the larger U.S. population. Simultaneously, several of the major social media players have been unwilling or slow to take action to curb organized harassment. Finally, the marketplace of online attention encourages so-called ‘clickbait’ articles and sensationalized news items that often contain misinformation or disinformation, or simply lack rigorous fact-checking. Without structural changes in both how social media sites respond to conflict and the economic incentives for spreading inaccurate or sensational information, extremism and therefore conflict will continue. More importantly, the geographical and psychological segmentation of the U.S. population into ‘red’ and ‘blue’ neighborhoods, communities, and states is unlikely to change. It is the latter that gives rise to overall political polarization, which is reflected in the incivility of online discourse.”

‘New variations of digital malfeasance [will] arise’

Jamais Cascio, distinguished fellow at the Institute for the Future, replied, “I don’t expect a significant shift in the tone of online discourse over the next decade. Trolling, harassment, etc., will remain commonplace but not be the overwhelming majority of discourse. We’ll see repeated efforts to clamp down on bad online behavior through both tools and norms; some of these efforts will be (or seem) successful, even as new variations of digital malfeasance arise.”

It will get better and worse

Anil Dash, technologist, wrote, “I expect the negative influences on social media to get worse, and the positive factors to get better. Networks will try to respond to prevent the worst abuses, but new sites and apps will pop up that repeat the same mistakes.”

Sites will ban the ‘unvouched anonymous’; look for the rise of ‘registered pseudonyms’

David Brin, author of “The Transparent Society” and a leader at the University of California, San Diego’s Arthur C. Clarke Center for Human Imagination, said, “Some company will get rich by offering registered pseudonyms, so that individuals may wander the Web ‘anonymously’ and yet vouched for and accountable for bad behavior. When this happens, almost all legitimate sites will ban the unvouched anonymous.”

Back around 20 B.C., Horace understood these problems

Fred Baker, fellow at Cisco, commented, “Communications in any medium (the internet being but one example) reflects the people communicating. If those people use profane language, are misogynistic, judge people on irrelevant factors such as race, gender, creed, or other such factors in other parts of their lives, they will do so in any medium of communication, including the internet. If that is increasing in prevalence in one medium, I expect that it is or will in any and every medium over time. The issue isn’t the internet; it is the process of breakdown in the social fabric. … If we worry about the youth of our age ‘going to the dogs,’ are we so different from our ancestors? In Book III of the Odes, circa 20 B.C., Horace wrote: ‘Our sires’ age was worse than our grandsires’. We, their sons, are more worthless than they; so in our turn we shall give the world a progeny yet more corrupt.’ I think the human race is not doomed, not today any more than in Horace’s day. But we have the opportunity to choose to lead them to more noble pursuits and more noble discussion of them.”

‘Every node in our networked world is potentially vulnerable’

Mike Liebhold, senior researcher and distinguished fellow at the Institute for the Future, wrote, “After Snowden’s revelations, and in the context of accelerating cybercrimes and cyberwars, it’s clear that every layer of the technology stack and every node in our networked world is potentially vulnerable. Meanwhile, both the magnitude and frequency of exploits are accelerating. As a result users will continue to modify their behaviors and internet usage, and designers of internet services, systems, and technologies will have to expend growing time and expense on personal and collective security.”

Politicians and companies could engage ‘in an increasing amount of censorship’

Jillian York, director for International Freedom of Expression at the Electronic Frontier Foundation, noted, “The struggle we’re facing is a societal issue we have to address at all levels, and one that the structure of social media platforms can exacerbate. Social media companies will need to address this, beyond community policing and algorithmic shaping of our newsfeeds. There are many ways to do this while avoiding censorship; for instance, better-individualized blocking tools and upvote/downvote measures can add nuance to discussions. I worry that if we don’t address the root causes of our current public discourse, politicians and companies will engage in an increasing amount of censorship.”

Sophisticated mathematical equations are having social effects

An anonymous professor at City University of New York wrote, “I see the space of public discourse as managed in new, more-sophisticated ways, and also in more brutal ones. Thus we have social media management in Mexico courtesy of Peñabots, hacking by groups that are quasi-governmental or serving nationalist interests (one thinks of Eastern Europe). Alexander Kluge once said, ‘The public sphere is the site where struggles are decided by other means than war.’ We are seeing an expanded participation in the public sphere, and that will continue. It doesn’t necessarily mean an expansion of democracy, per se. In fact, a lot of these conflicts are cross-border. In general the discussions will stay ahead of official politics in the sense that there will be increasing options for participation. In a way this suggests new kinds of regionalisms, intriguing at a time when the European Union is taking a hit and trade pacts are undergoing re-examination. This type of participation also means opening up new arenas, e.g., Facebook has been accused of left bias in its algorithm. That means we are acknowledging the role of what are essentially sophisticated mathematical equations as having social effects.”

The flip side of retaining privacy: Pervasive derogatory and ugly comments

Bernardo A. Huberman, senior fellow and director of the Mechanisms and Design Lab at Hewlett Packard Enterprise, said, “Privacy as we tend to think of it nowadays is going to be further eroded, if only because of the ease with which one can collect data and identify people. Free speech, if construed as the freedom to say whatever one thinks, will continue to exist and even flourish, but the flip side will be a number of derogatory and ugly comments that will become more pervasive as time goes on.”

Much of ‘public online discourse consists of what we and others don’t see’

Stephen Downes, researcher at the National Research Council of Canada, noted, “It’s important to understand that our perception of public discourse is shaped by two major sources: first, our own experience of online public discourse, and second, media reports (sometimes also online) concerning the nature of public discourse. From both sources we have evidence that there is a lot of influence from bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust, as suggested in the question. But a great deal of public online discourse consists of what we and others don’t see.”

How about a movement to teach people to behave?

Marcel Bullinga, trendwatcher and keynote speaker @futurecheck, wrote, “Online we express hate and disgust we would never express offline, face-to-face. It seems that social control is lacking online. We do not confront our neighbours/children/friends with antisocial behaviour. The problem is not [only] anonymous bullying: many bullies have faces and are shameless, and they have communities that encourage bullying. And government subsidies stimulate them – the most frightening aspect of all. We will see the rise of the social robots, technological tools that can help us act as polite, decent social beings (like the REthink app). But more than that we need to go back to teaching and experiencing morals in business and education: back to behaving socially.”

  • A recent Pew Research Center analysis of communications by members of the 114th Congress found that public engagement with the social media postings of these lawmakers was most intense when the citations were negative, angry and resentful.



Scrolling, Rickrolling, and Trolling

If you noodled around the internet long enough in the 1980s, sooner or later you’d get scrolled. It was a known hazard. You’d be innocently playing Colossal Cave Adventure, minding your own business, throwing axes at dwarfs, and someone on the network would flood your screen with ASCII.

Popular among scrollers was an ampersand-only rendering of the Old Man of the Mountain, the distinctive rock formation in northern New Hampshire. Maybe that was because our mainframe was also in New Hampshire, on the PDP-1-based Dartmouth Time-Sharing System, which had been implemented in 1963 by Basic authors John Kemeny and Thomas Kurtz. I’m glad the Old Man of the Mountain was memorialized—even briefly—on the ancient internet. Fifteen years ago, the cliff face in Franconia Notch collapsed; centuries of freezing and thawing had cracked the Old Man’s brow. Granite into ampersands into dust.

Quaint Reagan-era scrolling was a precursor to and eventually a subset of trolling, today’s modish rhetorical performance, which was invented in Usenet groups at the end of the ’80s. Judith Donath, a Harvard professor who has studied early internet deceptions, spotted a grassroots definition of trolling in 1995 on a wedding newsgroup, where a troll named Ultimatego had been upbraiding women for their vulgar wedding plans. Another user enlightened the upset brides. “Trolling is where you set your fishing lines in the water and then slowly go back and forth dragging the bait and hoping for a bite. Trolling on the Net is the same concept — someone baits a post and then waits for the bite on the line and then enjoys the ensuing fight.” (The fine points of the confounding trolling-trawling distinction are best left to fisherfolk, but in short: Trolling uses lines, and trawling uses nets.)

This early definition of trolling stands as a warning to the rest of us—potential fish. Where scrollers labored to bedevil one target and made their pranks clear, trolls cast indiscriminate lines for chumps and hide their intentions. They bait those fishing lines with what Donath calls a “pseudo-naive” idiom: “I was just asking!” Standing on that plausible deniability, they sit back and wait for tempers to flare.

After nearly 30 years, traditional newspapers have finally discovered trolling. They’re like retirees new to Adderall. Almost every week, editorialists at high-profile joints electrocute Twitter with a new your-liberal-views-are-vulgar sally. What’s more, they dress up this stuff as good-faith argument in a way that would have done Ultimatego proud.

Just this week in The New York Times, Bari Weiss defended a grab bag of thinkers, including Christina Hoff Sommers, the self-styled Factual Feminist, who makes barbed videos about discrimination against boys and women’s essential shortcomings in math. Sommers’ YouTube entries take stock trolling memes (domestic violence is exaggerated, Gamergate guys were kinda right, etc.) and aim them at the easily trolled, many of whom can be found on college campuses.

As college kids often serve as a proxy for the tenderhearted, ill-informed idealists in all of us, Weiss in her column seemed poised for a double axel: She could revel in Sommers’ trolling and troll her own readers when they bit the line on Twitter. But then the fish fought back, lighting into Weiss for a significant error. It all turned pretty bruising. One hopes Weiss still enjoyed the ensuing fight.

Fortunately, the content of these cyclical showdowns is mostly dated and immaterial. But it’s been depressing this year to discover how many serious journalists are new to the lulz—or what Mattathias Schwartz defined in 2008 as “the joy of disrupting another’s emotional equilibrium.” And newbies who can’t hold their lulz keep seeking higher doses. While it’s fun to call readers who dislike the Factual Feminist fascists (because it makes them freak out), it’s less fun to be cited for an error and forced to correct it. You then have to regain your own equilibrium enough to re-throw off the equilibrium of your trollees by calling them, maybe, double-fascists, and … what were we talking about again? Trolling is a nonadventure computer game. We should have stuck to Colossal Cave Adventure .

I miss getting scrolled. It felt a little like getting Rickrolled would feel 20 years later. It was a nuisance, and it was funny. Your whole screen shot up. Your messages, which were not saved or searchable in the DTSS “conferences” (think Slack on a WarGames interface), flew into the heavens. You did have to confront your vanity. My messages! My precious banter! In the last line, at the bottom of the screen, the vandal would gloat: YOU ARE THE VICTIM OF THE MAD SCROLLER!


But scrolling was more than enough trolling for me. Humiliation came easily in middle school, and when the online masquerade turned spiteful, the antics of bona fide trolls brought a lump to my throat. The disingenuous questions of those days—“Hey, did you know a new study shows girls rape guys more than the reverse?”—were clearly a prelude to something I couldn’t handle, and yet, like a Bari Weiss reader on Twitter, I could hardly keep from responding. Not every time, but too often, I would get taken in, and tell myself that I was going to clarify something about, say, sexual violence, because as an American I appreciate free and open debate. Instead, I’d feel the walls closing in with every word I hotly typed about the fake new study, knowing with growing certainty I was making a fool of myself.

But how to know in the moment that you’re being trolled? I propose we look to the throat-lump. Trolling happens when a set of symbols hyperarouses the body. Ampersands can get you moderately worked up—about the invasion of your screen, the lost data, the realization that you’re a sitting duck—but in the end they’re ampersands. But what if, instead of that twisty symbol (a ligature of the letters “e” and “t,” invented in the first century), you confronted a more meaningful string like “#metoo overreach” or “Trump’s a fantastic showman”? You’d be put out. Cortisol might surge. As MIT’s blockbuster study of fake news revealed this week, people are hooked by stories that convulse them in outsized sensations: fear, disgust, surprise. That’s all too human. But what’s unique to trollees is that they seek to litigate the pain away with debating-society performances on Twitter. These reactive tweets amplify the pain and spread it.

Trolling editorialists know this cycle, though they often tell themselves they’re merely being daring or contrarian. That’s bad faith. Trolling, as a full-fledged genre, has been around way too long now for practitioners to affect innocence. Like other polemical styles—calls to arms, exploratory essays, screeds—trolling can even be put to good purpose, as in periods of depression and apathy, or when readers need to be viscerally provoked. (This is not one of those times; we seem anxious as hell.)

But most trolls these days are not Cassandras sounding serious alarms. What’s worse, they don’t even seem to be having fun. If the sense of persecution that editorialists like Bret Stephens and Bari Weiss express on Twitter is any indication, the lulz aren’t as sustaining as they once were. Maybe it’s time for the bold-faced trolls to try the quieter pleasures of rigor and originality.

There is a long history of polemical essays and newspaper columns that don’t make the body anxious; they make the brain work. Among them: David Moats’ 2001 quietly fierce editorials in the Rutland Herald that closed the case for same-sex marriage. Errol Morris’ 2008 New York Times op-ed about digitally altered news photographs, which argued “believing is seeing.” Katha Pollitt’s recent essay in The Nation about how l’Affaire Russe is inconvenient to the left.

You know you’re in the presence of a thought-provoking argument and not a troll when you don’t flinch and instantly parry as if being lanced. Instead, you reflect. The mysterious ancient Hebrew word selah is one word for this response, which some believe is a musical notation akin to a fermata. It often comes at the end of psalms and means something like: Stop and think of that.

I feel palpably burned when trolled. Getting me to this spasmodic state is, of course, a troll’s sole purpose. The other day, a friend on Facebook called those who support the Mueller investigation “Russophobes” and my hands flew to the keyboard as if possessed by those letters: R-u-s-s-o-p-h-o-b-e. I’d been called fearful! My honor was at stake! I shot off a bitchy link to the original Mueller indictments of 13 Russian entities. What were they indicted for? Their damned trolling. I’d won! Or. No I hadn’t. I still felt sick and revved up.

I wish I’d kept still for a beat or two, maybe even tried some kind of mindfulness magic. In a world of trolls in columnists’ clothing, we need to take a breath when physically aroused by an article or a tweet. It’s time to see this stuff clogging our op-ed pages for what it is—the nuisance of ampersands. Letters, ligatures, and symbols best shrugged off. ¯\_(ツ)_/¯



Traits of a troll: BYU research examines motives of internet trolling

As social media and other online networking sites have grown in usage, so too has trolling – an internet practice in which users intentionally seek to draw others into pointless and, at times, uncivil conversations.

New BYU research recently published in the journal Social Media + Society sheds light on the motives and personality characteristics of internet trolls.

Through an online survey completed by over 400 Reddit users, the study found that individuals with dark triad personality traits (narcissism, Machiavellianism, psychopathy) combined with schadenfreude – a German word for pleasure derived from another’s misfortune – were more likely to demonstrate trolling behaviors.

“People who exhibit those traits known as the dark triad are more likely to demonstrate trolling behaviors if they derive enjoyment from passively observing others suffer,” said Dr. Pamela Brubaker, BYU public relations professor and co-author of the study. “They engage in trolling at the expense of others.”

The research, which was co-authored by BYU communications professor Dr. Scott Church and former BYU graduate student Daniel Montez, found that individuals who experienced pleasure from the failures or shortcomings of others considered trolling to be acceptable online behavior. Women who participated in the survey viewed trolling as dysfunctional while men were more likely to view it as functional.

“This behavior may happen because it feels appropriate to the medium,” said Church. “So, heavy users of the platform may feel like any and all trolling is ‘functional’ simply because it’s what people do when they go on Reddit.”


The researchers say it’s important to note that those who possess schadenfreude often consider trolling to be a form of communication that enriches rather than impedes online deliberation. Because of this view, they’re not concerned with how their words or actions affect those on the other side of the screen. To them, trolling isn’t perceived as destructive but merely as a means for dialogue to take place.

“They are more concerned with enhancing their own online experience rather than creating a positive online experience for people who do not receive the same type of enjoyment or pleasure from such provocative discussions,” said Brubaker.

However, there’s still hope for productive online discussions. The study found no correlation between being outspoken online and trolling behavior. The findings noted that users who actively “speak out” and voice their opinions online didn’t necessarily engage in trolling behaviors. Such results are encouraging and suggest that civil online discourse is attainable.

“Remember who you are when you go online,” said Church. “It helps when we think of others online as humans, people with families and friends like you and me, people who feel deeply and sometimes suffer. When we forget their identities as actual people, seeing them instead as merely usernames or avatars, it becomes easier to engage in trolling.”

Brubaker suggests approaching online discourses with an open mind in order to understand various perspectives.

“Digital media gives us the power to connect with people who have similar and different ideas, interests, and experiences from our own. As we connect with people online, we should strive to be more respectful of others and other points of view, even when another person’s perspective may not align with our own,” she said. “Each of us has the power to be an influence for good online. We can do this by exercising mutual respect. We can build others up and applaud the good online.”




Under the Bridge

By Scott McLemee


Last week, the journal First Monday – a prominent venue for scholarly research concerning the Internet – published a paper called “LOLing at tragedy: Facebook trolls, memorial pages, and resistance to grief online.” The author, Whitney Phillips, is a graduate student in English at the University of Oregon. The shooting at Virginia Tech came three days later. Talk about an unhappy coincidence…. The paper itself is smart, and written with more verbal flair than the prospect of peer review normally inspires. It is required reading for anyone trying to come to terms with the strange and sometimes ghastly ways people now respond to horrible news.

“RIP trolling” targets the webpages set up to commemorate death and disaster -- defacing them with comments and images intended to offend or enrage visitors to the page. There is no law against being a creep, as such, but disgust at RIP trolling has inspired efforts to work around that fact. In Australia, for example, one Bradley Paul Hampson recently received a three-year sentence for the graphics he put up on Facebook pages devoted to two murdered schoolchildren.

According to an article in the Queensland Courier-Mail, he posted “photographs of one victim with a penis drawn near their mouth and highly offensive messages, including ‘Woot I'm Dead', ‘Had It Coming' and others too offensive to publish.” Hampson was arrested for the possession and distribution of child pornography. But it seems more fitting to call him “the first person to be charged and convicted of Facebook vandalism,” as the newspaper’s caption writer did, beneath a photo of Hampson almost certainly chosen because of his smirk.

Whitney Phillips’s interest in the topic is ethnographic and analytic, not prosecutorial. Her graduate work in English has a “structured emphasis in folklore,” and she’s as scrupulously non-judgmental about RIP trolling as Alan Lomax would have been about the morality expressed in a murder ballad. The article in First Monday is based on research for her dissertation, now in progress, which is called “Internet Trolling: Cultural Digestion, Lulz, and the Politics of Transgression.” The word "lulz" in the subtitle is a bit of in-group slang, to be discussed below in due course. (It's a troll thing, you wouldn't understand.) 

After reading the article, but before getting in touch with its author, I went to Facebook expecting to locate the inevitable Virginia Tech Shooting 2011 memorial page(s) -- and the no less inevitable defacement. But searching for “Virginia Tech shooting” yielded no hits. A little more exploration turned up a page called R.I.P Virginia Tech Dec 8 2011. It had been created almost immediately after the news came out, but there was no subsequent activity. For that matter, no administrators for it were listed.

“If the shooting had happened a year ago,” Phillips said when we spoke by phone, “there would have been 50 pages on it. There’s been a pushback from Facebook since then. The algorithms are keeping the space as safe as possible. They shut things down before they even exist, almost.”

The vigilance makes her research more difficult. In a blog post, Phillips writes that when a tragedy occurs she has to rush to her computer to document the troll response “because this shit isn’t going to archive itself.” One remarkable thing about the bibliography of her paper is that it lists numerous webpages with a parenthetical “since deleted” following the title.

The constant erasure of trollic discourse, if you’ll pardon the expression, is part of the dynamic that Phillips is studying. In a sense, it is a much softer form of the policing that landed Bradley Paul Hampson in jail. Phillips indicates in conversation that she has experienced such policing firsthand: at one point, Facebook shut down her account for abusive behavior -- although all she’d done, she says, was “friend” various trolls and observe their behavior. The vigilant FB algorithms took this to be complicity. Phillips appealed the decision, making clear that she was engaged in research, and won reinstatement. But since then, she’s shut down her profile out of misgivings over Facebook’s role as a platform devoted to generating money out of identity in ways over which users have little control.

According to Wikipedia, which in turn cites the Oxford English Dictionary, the earliest confirmed reference to trolling (in the sense of a kind of Internet behavior) dates from 1992. I try not to argue with the OED any more than necessary. That seldom goes well. But in this case, it is simply wrong. A search of the Usenet archives shows one person accusing another of “trolling for commentary” no later than 1986. By 1989, somebody responds to a comment with “Trolling for abuse, Eliot? Or is this some weird self-immolating postmodernist gesture?”

One folk etymology has it that trolling is a variant of “trawling”: pulling a net or a baited line from the back of a boat to capture fish. The other, considerably more common explanation is that the noun came first, with “troll” as an insulting label for the Usenet provocateur. The image of an irritable creature living under a bridge in turn gave rise to the injunction, “Don’t feed the trolls” (i.e., don’t let yourself be provoked because that’s what nourishes them), which then caught on in the blogosphere at some point in the early ‘00s, albeit without much success in thinning the population.

The old-school Usenet troll sometimes posted under his real name, and he tended to act alone. But the species has undergone a significant mutation, according to Phillips, who thinks the new breed came into its own in the mid-‘00s. The contemporary troll always uses a pseudonym, and usually more than one -- and keeps a number of email accounts and Facebook profiles in case he is banned, which happens a lot. Any serious troll is a past master at cloaking or disguising his Internet service provider.

Concealment, then, is essential. At the same time, the notion that trolls are antisocial is misleading: A crucial point about the new sort is that they interact with one another, form friendships, and work together.  Phillips says she has interacted with certain trolls for three years now -- but still does not know their real names or even, with any certainty, in what part of the world they are located.

Her observations and interviews across that period suggest that trolls constitute a subculture, with its own distinct tradition, lingo, and outlook. They have gathering points and networks; they have ways to recognize one another even when obliged to change pseudonyms. While RIP trolling has generated media attention and moral panic, their influence is both broader and less obvious. Phillips told me that trolling is “both ubiquitous and invisible” and “permeates the online ecosystem” in ways that outsiders tend not to recognize.

Among the phenomena with “ties, and often direct ties, to trolling” that she listed in an e-mail note are “ LOLcats , RickRolling [in which individuals are unwittingly redirected to a clip of Rick Astley’s ‘Never Gonna Give You Up’ ], ‘hactivism,’ Anonymous, the Guy Fawkes mask, [and] half the memes on Reddit, the list goes on.”

Trolls are in it for the lulz, and they take getting b& in stride.

Clearly a little translation is in order. It is simple enough to figure out what b& means. Just pronounce it: “banned” – enough of an occupational hazard to merit a shorthand expression. But "lulz" takes a bit of unpacking. While derived from the familiar interjection LOL, for “laugh out loud,” lulz carries a special in-group nuance. Lulz refers to “a particular kind of unsympathetic, ambiguous laughter similar to schadenfreude,” explained Phillips by email. “Unlike schadenfreude, however, which is often described in passive terms (a bad thing happened to someone I don't like, so I laughed), lulz are much more active, or at the very least imply the vicarious enjoyment of others' direct actions (I made a guy so mad he started typing in all caps, so I laughed and/or I saw someone else make a guy so mad he started typing in all caps, so I laughed).”

Well, everybody needs an ethos, I guess, and a case might even be made for understanding the troll as some kind of trickster figure.

Still, it’s hard to see RIP trolling as anything more than a blend of sadism and cowardice. The lulz of making jokes about, say, a teenager’s suicide involve all the satisfactions of inflicting psychic violence at random, with none of the inconvenience of swallowing your own teeth when somebody punches you in the face repeatedly.

Based on interviews with trolls, though, Phillips says that there is more to it than vicious misanthropy – that at least some of them have a very specific agenda, and a moral code, of sorts. They do not violate pages set up by the families of the deceased, and don’t mean to hurt them.

What angers and disgusts them, she says, is how the media will pick out certain deaths or catastrophes and do saturation coverage – after which, people rush to set up online memorials that then draw “grief tourists.” The latter are people “who have no real–life connection to the victim,” explains Phillips in her paper, "and who, according to the trolls, could not possibly be in mourning. As far as trolls are concerned, grief tourists are shrill, disingenuous and, unlike grieving friends and families, wholly deserving targets.”

One way of trolling is to set up a FB memorial for a nonexistent dead person and then mock the visitors who soon turn up. “This isn’t grief,” Phillips quotes one troll as arguing. “This is boredom and a pathological need for attention masquerading as grief.”

By this logic, the point of RIP trolling is to disrupt -- or at least challenge -- the sensationalism, narcissism, and vapid communitarian sentimentality fostered by the 24-hour cable news cycle and social networking. They subject anyone who gets caught up in all of it to scathing laughter. Possibly this will be for the lemmings’ own good. Their rage might be a first step towards learning the difference between phony emotion and meaningful experience. Or maybe it will give them a heart attack. It’s lulz either way.

Phillips presents as strong a case for this interpretation as can be made, perhaps -- while also analyzing the dissociation between online and real-life identity that allows trolls to avoid thinking about the collateral damage to bereaved family members and friends of the deceased. I read the paper with interest, but also with the nagging thought that RIP trolling provides a critique of the contemporary media in roughly the sense that lynching offered one of the criminal justice system. In either case, malice is more evident than principle.

On the other hand, trollery includes RickRolling, which never hurt anybody, apart from getting that song stuck in people’s heads repeatedly. Trolling covers a multitude of activities -- most of them irritating, though not actually sociopathic.

The thought that there might be a million trolls in the United States, as Phillips thinks is possible, seems…what? Perplexing? Certainly that. But also a complicating factor in all sorts of ways. A troll is Mark Zuckerberg’s sinister twin, cloned to infinity.

And since Whitney Phillips has thought about the phenomenon more than anyone, she should have the last word. After listening to my misgivings by phone, she wrote:

“Although I fully understand the impulse to denounce trolls and trolling behaviors (ironically, trolls actively pursue this very response), I would simultaneously argue that trolling has much more to offer, and much more to say, than critics might realize. My basic argument — although it is an argument riddled with caveats and qualifications — is that, contrary to the assumption that they represent all that is terrible about human nature, about anonymity, and about the internet generally, trolls also perform an important cultural function … they take whatever they find, the good, the bad, the hateful, the creative, the hypocritical, the amusing, anything and everything else, and put it on display.

“Sometimes they do this purposefully, with political intent. Sometimes they do things simply because they can. Either way, by mapping trolls’ behaviors, it is possible to similarly map trends and tensions within the host culture — the byproducts of which trolls consume, recombine, and eagerly hurl back at an unsuspecting populace. Tell me what the trolls are doing, in other words, and I’ll tell you about the world they live in."


Personality and internet trolling: a validation study of a Representative Sample

Evita March, Liam McDonald & Loch Forsyth

Current Psychology, Volume 43, pages 4815–4818 (2024). Open access. Published: 29 April 2023.


To date, characteristics of the internet “troll” have largely been explored in general community samples, which may lack representation of the sample of interest. In this brief report, we aimed to evidence the role of gender and the personality traits of sadism, psychopathy, extraversion, conscientiousness, and agreeableness in a sample of individuals who self-report having perpetrated trolling behaviours. Participants (N = 163; 50.3% women; Mage = 27.35, SD = 8.78) were recruited via social media advertisements and completed an anonymous online questionnaire. The variables explained 55.5% of variance in trolling. We found self-reported trolls were more likely to be men and to have higher psychopathy, higher sadism, and lower agreeableness. Findings of this representative sample have implications for understanding, managing, and preventing this antisocial online behaviour.


Internet trolling (“trolling”) is an antisocial online behaviour characterised by posting inflammatory, provocative comments with the intention of upsetting others (Buckels et al., 2014). As experiencing trolling can have significant psychological impact (Coles & West, 2016), management and prevention of this online behaviour is critical. Understanding individual differences associated with trolling, such as gender and personality traits, can inform development of appropriate interventions (March, 2019).

Although women engage in other antisocial online behaviour more than men (e.g., intimate partner cyberstalking; March et al., 2022), men are more likely to perpetrate trolling (Buckels et al., 2014). Possibly, this is due to their more antisocial use of social media (Howard et al., 2019), their higher dominance and competitiveness (March & Steele, 2020), and viewing trolling as having a functional purpose (Brubaker et al., 2021). People with higher subclinical psychopathy and sadism also perpetrate more trolling, likely due to their lower empathy and a deceitful interpersonal style (i.e., psychopathy; March & Steele, 2020), and their enjoyment of hurting others (i.e., sadism; March, 2019). People with lower agreeableness, lower conscientiousness (Buckels et al., 2014), and higher extraversion (Zezulka & Seigfried-Spellar, 2016) also perpetrate more trolling – although extraversion results are mixed. Here, trolling may be a product of increased antagonism (i.e., low agreeableness), increased impulsivity (i.e., low conscientiousness), and assertiveness and a need to establish social status (i.e., higher extraversion; Zezulka & Seigfried-Spellar, 2016).

In the current study, we aimed to replicate these findings in a representative sample of self-reported “trolls”. As previous studies have largely assessed tendencies to troll (Zezulka & Seigfried-Spellar, 2016) and agreement with trolling behaviours (March, 2019) in community samples, the validity of results may be questionable. Although someone might agree with trolling behaviours (e.g., the GAIT; Buckels et al., 2014), this agreement may not translate to actual behaviour. We predicted that men would be more likely than women to troll, and that people with higher sadism, higher psychopathy, lower agreeableness, lower conscientiousness, and higher extraversion would perpetrate more trolling.

After receiving institutional ethical approval, participants were recruited to complete a voluntary, anonymous online questionnaire via social media (e.g., Facebook, Reddit, and Instagram) advertisements. After providing informed consent, participants (N = 663) were presented with an operational definition of trolling (see supplementary material), followed by the inclusion question, “do you troll other people/groups online?” Only those who answered yes (N = 266; 40.12%) could commence the questionnaire. The questionnaire took approximately 20–30 min to complete and included an attention check. We also included an open-ended question to screen for inattention and assessed start/finish times for outliers.

The final sample (N = 163; 50.3% women) completed all measures (see Table 1) and satisfied attention checks. Participants were aged 18–62 years (Mage = 27.36, SD = 8.78) and were predominantly heterosexual (75.5%) and Australian (63.8%). An a priori power calculation indicated that a minimum sample size of 153 (effect size = 0.15, alpha = 0.05, 95% power) was required for statistical power. The study was a correlational, cross-sectional design, and based on research recommendations we controlled for socially desirable responding (see March, 2019).

Assumptions of linearity, homoscedasticity, normality, and multicollinearity were screened and met. Table 2 presents total and gendered descriptive statistics and bivariate correlations. To test the hypothesis, a two-step hierarchical multiple regression analysis was run (see Table 3). The total model explained 55.5% of variance in trolling, R² = 0.56, F(7,149) = 26.59, p < .001, with a large effect size of ƒ² = 1.27.
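For readers who want to check the arithmetic, the reported effect size follows directly from the model R², since Cohen’s ƒ² for a regression model is R²/(1 − R²). A minimal sketch using only the values reported above:

```python
# Cohen's f-squared for a multiple regression model: f2 = R^2 / (1 - R^2).
r_squared = 0.56                        # R^2 reported for the full model
f_squared = r_squared / (1 - r_squared)
print(round(f_squared, 2))              # -> 1.27, the reported effect size
```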

In this brief report, we aimed to evidence the utility of gender and personality to predict trolling in a sample of self-reported trolls. Corroborating past findings of general population samples (Buckels et al., 2014; Craker & March, 2016; Sest & March, 2017), men were more likely than women to troll, and trolling was predicted by lower empathy and a deceitful interpersonal style (i.e., higher psychopathy), enjoyment harming others (i.e., higher sadism), and antagonism (i.e., lower agreeableness).

Results also demonstrate some key differences in general population samples and representative samples. Conscientiousness did not predict trolling, a finding contrary to the hypothesis and to general population samples (Buckels et al., 2014). However, as trolling is deliberate (Sest & March, 2017), and can even include strategic coordination (Etudo et al., 2019), it follows that trolls may not display low conscientiousness. Further, extraversion did not predict trolling, a finding also contrary to previous research – though extraversion findings have been mixed (see Buckels et al., 2014; Gylfason et al., 2021; and Zezulka & Seigfried-Spellar, 2016), possibly due to the different measures employed to assess extraversion. Still, extraversion shares an inverse relationship with forms of deception (McArthur et al., 2022), and as deception is a key characteristic of trolling (Hardaker, 2010), the nonsignificant extraversion finding is logical. However, it is worth noting that extraversion was a significant predictor until sadism and psychopathy were entered, indicating either (1) sadism and psychopathy capture the variance explained by extraversion, or (2) the strength of sadism and psychopathy rendered extraversion nonsignificant.

We conclude that general population samples may somewhat, but not entirely, capture the psychological profile of trolls. Interventions seeking to target trolling behaviours should consider the deliberate and strategic nature of trolling; the troll may not be a reactionary, impulsive aggressor (i.e., low conscientiousness). Although the modest sample size of the current study somewhat limits generalisability, there was adequate statistical power and the effect size (ƒ² = 1.27) was large. Future research exploring trolling could seek to apply theoretical frameworks to understand this antisocial use of social media, such as Uses and Gratifications theory (see Geary et al., 2021). Further, as trolls are often diverse in their purpose and goals (Sanfilippo et al., 2017), future research could focus on examining the individual differences of trolls who vary by goals, strategies, and platform.

Data availability

The datasets generated and analysed during the current study are available in the figshare repository: https://figshare.com/s/d54cd377d52df93ab37d

References

Brubaker, P. J., Montez, D., & Church, S. H. (2021). The power of schadenfreude: Predicting behaviors and perceptions of trolling among Reddit users. Social Media + Society, 1–13. https://doi.org/10.1177/20563051211021382

Buckels, E. E., Trapnell, P. D., & Paulhus, D. L. (2014). Trolls just want to have fun. Personality and Individual Differences, 67, 97–102. https://doi.org/10.1016/j.paid.2014.01.016

Coles, B., & West, M. (2016). Trolling the trolls: Online forum users’ constructions of the nature and properties of trolling. Computers in Human Behavior, 60, 233–244. https://doi.org/10.1016/j.chb.2016.02.070

Craker, N., & March, E. (2016). The dark side of Facebook®: The Dark Tetrad, negative social potency, and trolling behaviours. Personality and Individual Differences, 102, 79–84. https://doi.org/10.1016/j.paid.2016.06.043

Etudo, U., Yoon, V. Y., & Yaraghi, N. (2019, January). From Facebook to the streets: Russian troll ads and Black Lives Matter protests. In Proceedings of the 52nd Hawaii International Conference on System Sciences.

Geary, C., March, E., & Grieve, R. (2021). Insta-identity: Dark personality traits as predictors of authentic self-presentation on Instagram. Telematics and Informatics, 63. https://doi.org/10.1016/j.tele.2021.101669

Gylfason, H. F., Sveinsdottir, A. H., Vésteinsdóttir, V., & Sigurvinsdottir, R. (2021). Haters gonna hate, trolls gonna troll: The personality profile of a Facebook troll. International Journal of Environmental Research and Public Health, 18(5722), 1–11. https://doi.org/10.3390/ijerph18115722

Hardaker, C. (2010). Trolling in asynchronous computer-mediated communication: From user discussions to academic definitions. Journal of Politeness Research, 6(2), 215–242. https://doi.org/10.1515/jplr.2010.011

Howard, K., Zolnierek, K. H., Critz, K., Dailey, S., & Ceballos, N. (2019). An examination of psychosocial factors associated with malicious online trolling behaviors. Personality and Individual Differences, 149, 309–314. https://doi.org/10.1016/j.paid.2019.06.020

John, O. P., Donahue, E. M., & Kentle, R. L. (1991). The Big Five Inventory: Versions 4a and 54. Berkeley: University of California.

Jones, D. N., & Paulhus, D. L. (2014). Introducing the Short Dark Triad (SD3): A brief measure of dark personality traits. Assessment, 21(1), 28–41. https://doi.org/10.1177/1073191113514105

March, E. (2019). Psychopathy, sadism, empathy, and the motivation to cause harm: New evidence confirms malevolent nature of the internet troll. Personality and Individual Differences, 141, 133–137. https://doi.org/10.1016/j.paid.2019.01.001

March, E., & Steele, G. (2020). High esteem and hurting others online: Trait sadism moderates the relationship between self-esteem and internet trolling. Cyberpsychology, Behavior, and Social Networking, 23, 441–446. https://doi.org/10.1089/cyber.2019.0652

March, E., Szymczak, P., Di Rago, M., & Jonason, P. K. (2022). Passive, invasive, and duplicitous: Three forms of intimate partner cyberstalking. Personality and Individual Differences, 189, 111502. https://doi.org/10.1016/j.paid.2022.111502

McArthur, J., Jarvis, R., Bourgeois, C., & Ternes, M. (2022). Lying motivations: Exploring personality correlates of lying and motivations to lie. Canadian Journal of Behavioural Science / Revue canadienne des sciences du comportement. Advance online publication. https://doi.org/10.1037/cbs0000328

O’Meara, A., Davies, J., & Hammond, S. (2011). The psychometric properties and utility of the Short Sadistic Impulse Scale (SSIS). Psychological Assessment, 23(2), 523–531. https://doi.org/10.1037/a0022400

Reynolds, W. M. (1982). Development of reliable and valid short forms of the Marlowe-Crowne Social Desirability Scale. Journal of Clinical Psychology, 38(1), 119–125.

Sanfilippo, P., Yang, S., & Fichman, P. (2017). Managing online trolling: From deviant to social and political trolls. In Proceedings of the 50th Hawai’i International Conference on System Sciences (HICSS-50). Los Alamitos: IEEE Press.

Sest, N., & March, E. (2017). Constructing the cyber-troll: Psychopathy, sadism, and empathy. Personality and Individual Differences, 119, 69–72. https://doi.org/10.1016/j.paid.2017.06.038

Zezulka, L. A., & Seigfried-Spellar, K. C. (2016). Differentiating cyberbullies and internet trolls by personality characteristics and self-esteem. Journal of Digital Forensics, Security and Law: Special Issue on Cyberharassment, 11(3), 7–26. https://doi.org/10.15394/jdfsl.2016.1415


Open Access funding enabled and organized by CAUL and its Member Institutions.

Author information

Authors and affiliations

Institute of Health and Wellbeing, Federation University, 100 Clyde Road, Berwick, VIC, 3806, Australia

Evita March

Ballarat Psychology Clinic, Ballarat, Australia

Liam McDonald

School of Psychology, Deakin University, Geelong, Australia

Loch Forsyth


Corresponding author

Correspondence to Evita March.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Compliance with ethical standards

The authors declare no potential conflicts of interest. All research was conducted in accordance with ethical standards and the project was approved by the [blinded] Human Research Ethics Committee. Participants provided informed consent to participate.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

March, E., McDonald, L., & Forsyth, L. Personality and internet trolling: a validation study of a Representative Sample. Curr Psychol 43, 4815–4818 (2024). https://doi.org/10.1007/s12144-023-04586-1


Accepted: 19 March 2023

Published: 29 April 2023

Issue Date: February 2024

DOI: https://doi.org/10.1007/s12144-023-04586-1



Inside the bizarre world of internet trolls and propagandists - Andrew Marantz


Journalist Andrew Marantz spent three years embedded in the world of internet trolls and social media propagandists, seeking out the people who are propelling fringe talking points into the heart of conversation online. Go down the rabbit hole of online propaganda and misinformation -- and learn how we can start to make the internet less toxic.

Video created by TED; lesson plan created by Lauren McAlpine.


Stanford research shows that anyone can become an Internet troll

Photo by icsnaps, Shutterstock: Under the right circumstances, just about anybody can become an Internet troll, according to Stanford research.

Stanford News - February 6, 2017 - by Taylor Kubota

Internet trolls, by definition, are disruptive, combative and often unpleasant with their offensive or provocative online posts designed to disturb and upset.

The common assumption is that people who troll are different from the rest of us, allowing us to dismiss them and their behavior. But research from Stanford University and Cornell University, published as part of the upcoming 2017 Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2017), suggests otherwise. The research offers evidence that, under the right circumstances, anyone can become a troll.

“We wanted to understand why trolling is so prevalent today,” said Justin Cheng, a computer science researcher at Stanford and lead author of the paper. “While the common knowledge is that trolls are particularly sociopathic individuals that occasionally appear in conversations, is it really just these people who are trolling others?”

Taking inspiration from social psychology research methods, Cheng investigated whether trolling behavior is an innate characteristic or whether situational factors can influence people to act like trolls. Through a combination of experimentation, data analysis and machine learning, the researchers homed in on simple factors that make the average person more likely to troll.

Becoming a troll

Following previous research on antisocial behavior, the researchers decided to focus on how mood and context affect what people write on a discussion forum. They set up a two-part experiment with 667 subjects recruited through a crowdsourcing platform.

In the first part of the experiment, the participants were given a test, which was either very easy or very difficult. After taking the tests, all subjects filled out a questionnaire that evaluated various facets of their mood, including anger, fatigue, depression and tension. As expected, the people who completed the difficult test were in a worse mood than those who had the easy test.

All participants were then instructed to read an article and engage in its comment section. They had to leave at least one comment but could leave multiple comments, cast up-votes and down-votes, and reply to other comments. All participants saw the same article on the same platform, created solely for the experiment, but some participants were given a forum with three troll posts at the top of the comment section. Others saw three neutral posts.

Two independent experts evaluated whether the posts left by subjects qualified as trolling, defined generally in this research by a combination of posting guidelines taken from several discussion forums. For example, personal attacks and cursing were indicative of troll posts.

About 35 percent of people who completed the easy test and saw neutral posts then posted troll comments of their own. That percentage jumped to 50 percent if the subject either took the hard test or saw trolling comments. People exposed to both the difficult test and the troll posts trolled approximately 68 percent of the time.

The spread of trolling

To relate these experimental insights to the real world, the researchers also analyzed anonymized data from CNN’s comment section from throughout 2012. This data consisted of 1,158,947 users, 200,576 discussions and 26,552,104 posts. This included banned users and posts that were deleted by moderators. In this part of the research, the team defined troll posts as those that were flagged by members of the community for abuse.

It wasn’t possible to directly evaluate the mood of the commenters, but the team looked at the time stamp of posts because previous research has shown that time of day and day of week correspond with mood. Incidents of down-votes and flagged posts lined up closely with established patterns of negative mood. Such incidents tend to increase late at night and early in the week, which is also when people are most likely to be in a bad mood.
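The aggregation described here is straightforward to picture. A minimal sketch, assuming a hypothetical table of comments with a time stamp and an abuse-flag column (the file and column names are illustrative, not from the study):

```python
import pandas as pd

# Hypothetical comment log: one row per post, with a "timestamp" datetime
# column and a boolean "flagged" column marking posts flagged for abuse.
posts = pd.read_csv("comments.csv", parse_dates=["timestamp"])

posts["hour"] = posts["timestamp"].dt.hour
posts["weekday"] = posts["timestamp"].dt.day_name()

# Share of posts flagged for abuse, by hour of day and by day of week --
# the mood proxies the researchers describe.
flag_rate_by_hour = posts.groupby("hour")["flagged"].mean()
flag_rate_by_day = posts.groupby("weekday")["flagged"].mean()

print(flag_rate_by_hour.idxmax())  # hour with the highest flag rate
print(flag_rate_by_day.idxmax())   # weekday with the highest flag rate
```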

The researchers investigated the effects of mood further and found that people were more likely to produce a flagged post if they had recently been flagged or if they had taken part in a separate discussion that merely included flagged posts written by others. These findings held true no matter what article was associated with the discussion.

“It’s a spiral of negativity,” explained Jure Leskovec, associate professor of computer science at Stanford and senior author of the study. “Just one person waking up cranky can create a spark and, because of discussion context and voting, these sparks can spiral out into cascades of bad behavior. Bad conversations lead to bad conversations. People who get down-voted come back more, comment more and comment even worse.”

Predicting bad behavior

As a final step in their research, the team created a machine-learning algorithm tasked with predicting whether the next post an author wrote would be flagged.

The information fed to the algorithm included the time stamp of the author’s last post, whether the last post was flagged, whether the previous post in the discussion was flagged, the author’s overall history of writing flagged posts and the anonymized user ID of the author.

The findings showed that the flag status of the previous post in the discussion was the strongest predictor of whether the next post would be flagged. Mood-related features, such as timing and previous flagging of the commenter, were far less predictive. The user’s history and user ID, although somewhat predictive, were still significantly less informative than discussion context. This implies that, while some people may be consistently more prone to trolling, the context in which we post is more likely to lead to trolling.
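To make the shape of that prediction task concrete, here is a minimal sketch, not the researchers’ code: a logistic regression over toy versions of the features the article lists (all column names and values are hypothetical):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy feature table mirroring the features described above: timing of the
# author's last post, flag status of that post and of the previous post in
# the discussion, and the author's overall flag history. Values are invented.
posts = pd.DataFrame({
    "last_post_hour":              [23, 9, 2, 14, 22, 1, 10, 15],
    "last_post_flagged":           [1, 0, 1, 0, 1, 1, 0, 0],
    "prev_post_in_thread_flagged": [1, 0, 1, 0, 1, 0, 0, 1],
    "author_flag_rate":            [0.40, 0.00, 0.30, 0.10, 0.50, 0.20, 0.00, 0.05],
    "next_post_flagged":           [1, 0, 1, 0, 1, 0, 0, 0],  # label to predict
})

X = posts.drop(columns="next_post_flagged")
y = posts["next_post_flagged"]

model = LogisticRegression().fit(X, y)
# Predicted probability that each author's next post gets flagged.
print(model.predict_proba(X)[:, 1].round(2))
```

In a model of this shape, one can compare how much predictive weight falls on the discussion-context feature versus the user-history features, which is the comparison the study reports.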

Troll prevention

Between the real-life, large-scale data analysis, the experiment and the predictive task, the findings were strong and consistent. The researchers suggest that conversation context and mood can lead to trolling. They believe this could inform the creation of better online discussion spaces.

“Understanding what actually determines somebody to behave antisocially is essential if we want to improve the quality of online discussions,” said Cristian Danescu-Niculescu-Mizil, assistant professor of information science at Cornell University and co-author of the paper. “Insight into the underlying causal mechanisms could inform the design of systems that encourage a more civil online discussion and could help moderators mitigate trolling more effectively.”

Interventions to prevent trolling could include discussion forums that recommend a cooling-off period to commenters who have just had a post flagged, systems that automatically alert moderators to a post that’s likely to be a troll post or “shadow banning,” which is the practice of hiding troll posts from non-troll users without notifying the troll.
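The first of those interventions is simple enough to sketch. A hypothetical cooling-off rule, assuming a forum backend that records when a user’s post was last flagged (the 15-minute window and function names are illustrative, not from the paper):

```python
from datetime import datetime, timedelta, timezone

COOLING_OFF = timedelta(minutes=15)  # assumed window length

def can_comment(last_flagged_at, now=None):
    """Allow a new comment only outside the user's cooling-off window.

    last_flagged_at: timezone-aware datetime of the user's most recently
    flagged post, or None if the user has never been flagged.
    """
    if last_flagged_at is None:
        return True
    now = now or datetime.now(timezone.utc)
    return now - last_flagged_at >= COOLING_OFF
```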

The researchers believe studies like this are only the beginning of work that’s been needed for some time, since the Internet is far from being the worldwide village of cordial debate and discussion people once thought it would become.

“At the end of the day, what this research is really suggesting is that it’s us who are causing these breakdowns in discussion,” said Michael Bernstein, assistant professor of computer science at Stanford and co-author of the paper. “A lot of news sites have removed their comments systems because they think it’s counter to actual debate and discussion. Understanding our own best and worst selves here is key to bringing those back.”

This work was supported in part by Microsoft, Google, the National Science Foundation, the Army Research Office, the U.S. Department of Defense, the Stanford Data Science Initiative, Boeing, Lightspeed, SAP and Volkswagen.

Originally published at Stanford News.


EssaysForStudent.com - Free Essays, Term Papers & Book Notes

Essay on Internet Trolls

In the past, bullying could be found in schools, in the workplace, and in sports. Nowadays, there is a new form of bullying called cyberbullying, and cyberbullies are also known as internet trolls. Lisa Selin Davis, an internet blogger, writes about her personal experiences dealing with internet trolls. She states in her blog that, according to sociologists and psychologists who have studied online behavior, all kinds of people can become trolls. Three major types of trolls are the Moral Crusader, the Hater, and the Debunker. All three spend time and energy engaging in virtual hate.

A Moral Crusader is a person who thinks his or her way is the right way. Crusaders use their beliefs and morals to put others down and try to convince others that their own moral beliefs are superior to everyone else’s. Examples of moral crusader groups include the anti-tobacco lobby, the gun control lobby, anti-pornography groups, and the pro-life/pro-choice movements. These are all arenas where crusaders like to project their strong opinions onto others: social movements that campaign around a symbolic or moral issue.

The Hater finds other users with similar views and forms a group bonded by those shared views; together they attack and try to destroy their target. Haters seek attention by making provocative comments that are aggressive or offensive, and they are usually people who greatly dislike a specific person or thing. An example of an internet hater: Susan: “You know, Kevin from Accounting is doing very well. He just bought a house in a very nice part of town.” Jane (Hater): “If he’s doing so well, why does he drive that ’89 Taurus?” Haters’ comments are usually brought on by jealousy.
