Gaming Communities Control Hate Speech

When University of Kansas researchers published a study examining what the video gaming community, the esports ecosystem and other sectors of gaming have to teach social media giants about regulating hate speech, they couldn’t have anticipated that, less than two months later, the issue would be on the front page of every major media publication worldwide. But that’s what happened when a whistleblower presented evidence to the U.S. Congress showing that Facebook was not only failing to control hate speech but was actually encouraging it in order to increase engagement – and increase its profits.

Up until that point, most observers assumed that social media platforms such as Facebook were actively looking for solutions to the problem of hate speech. However, whistleblower Frances Haugen was able to produce documents and other evidence showing that this was not the case: Facebook had created algorithms that specifically boosted posts containing negative speech over positive speech.

Now that those secret policies have been made public, an embarrassed Facebook is likely to start listening more closely to research-based recommendations about how to reduce hate speech. One of the first pieces of hard evidence that should be crossing FB execs’ desks is the August 2021 report by University of Kansas researchers, which offers guidance on how, by learning from the gaming community, it and other social media platforms can do more to curb hate speech.

Content Management

The University of Kansas report, Cyber-Recapitulation? What Online Gaming Can Teach Social Media About Content Management, provides a comprehensive summary of how social media can and should look to the experiences and lessons of online gaming to more effectively regulate speech on their platforms.

Harrison Rosenthal and Genelle Belmas are the article’s co-authors. Their goal in publishing the report was to approach the problem of hate speech on social media by exploring how the gaming community has been navigating similar issues over the last 10 years. The article, published in Jurimetrics, the American Bar Association’s journal of law and technology, traces gaming’s evolution into social media and connects it to today’s struggles with negative interactions, bullying, threatening speech and expressions of racism and intolerance.

According to the article, social media evolved from game playing. The gaming community has grown as a place where people communicate for a common goal. Even though social media sites aren’t based on gameplay, they are, in fact, kind of a game of their own. People post with the goal of accumulating “likes,” “retweets,” “shares,” “comments” and other types of engaging responses.

Over time, the gaming world developed a community-based approach that saw users themselves control what is acceptable and set certain standards. Social media, however, continues to struggle to find the correct approach as algorithms and execs decide what is acceptable and what isn’t.

That’s social media’s dilemma, says Rosenthal, a lawyer who is currently pursuing a PhD in journalism and mass communications. “Over time, the gaming world morphed from people caring mainly about the rules and outcomes of the game to being more about being online and interacting with people. Our argument is that in social media your representation, whether you like it or not, is an avatar. Speech is regulated in many contexts, but the way it is regulated is wildly misunderstood. People come to social media with a fundamental misunderstanding of their rights.”

Genelle Belmas, the article’s co-author, is an associate professor of journalism and mass communications at the University of Kansas. Belmas became interested in how the gaming community polices hate speech because, as an avid gamer herself, she had noticed that a friend was given the title of “sentinel” in an online game by his peers, a role that gave him the authority to intervene if abusive behavior occurred during the game.

“He was empowered to pull people out of the game and talk to them about how they played and treated other players,” Belmas explained. “He was empowered to make regulatory decisions, and that system, in which sentinels, guilds or other users make bottom-up decisions, works well. Social media could benefit from the same approach.”

User Regulation

Having trusted users regulate content for quality and appropriateness on the web is nothing new. Rosenthal and Belmas point out that on sites like Wikipedia and Reddit, such user oversight is built into the platform and accepted by users and administrators alike.

The approach works because, when something untoward appears online, immediate action is essential, and algorithms don’t always accomplish the desired results. Facebook’s “no nudity” rule, for example, resulted in pictures of nursing mothers being removed as “inappropriate,” a decision that was itself highly offensive to a large part of the Facebook community.

In addition, many of the decisions about what is and is not appropriate are made by people who live in other parts of the world. What is acceptable to them may be unacceptable in a market like the United States, and vice versa. By relying on users, there would be fewer misunderstandings about what is and isn’t acceptable. Belmas also notes that relying on users to moderate content removes the financial incentive – a point underscored by the recent Facebook fiasco. “Social media companies will always capitulate if it serves their bottom line,” said Belmas. “The question is to what degree does speech give way to money, and the answer is always, unless you use the model in which users have the power.”

The tide is swelling as lawmakers, social activists and media personalities call for a change in what social media organizations are and are not allowed to permit. The question many ask is: can a community be trusted to regulate what is and isn’t acceptable in online posts?

If the experiences of the gaming community are any indication, their solution could help to solve the problem. “Whether or not we like it, social media companies are getting more powerful, and the political will is that something needs to be done,” said Belmas. “One of the best approaches we can see is a user-generated, bottom-up approach. In such a model, social media companies are not giving up power. They’re redistributing it.”
