Despite pledges to crack down on hateful content, TikTok has a long way to go to improve its practices and protect its users from harm, according to new research by the Institute for Strategic Dialogue (ISD).
Aiming to provide an in-depth analysis of the state of extremism and hate on TikTok, the report ‘Hatescape: An In-Depth Analysis of Extremism and Hate Speech on TikTok’ examines how extremists use profiles, hashtags, music and other effects on the social media platform.
Researchers examined a sample of more than 1,000 videos from 491 accounts, or about eight hours of content, that seemingly violated TikTok’s community guidelines.
The counter-extremism think tank identified:
- 312 videos promoting white supremacy,
- 246 videos expressing support for known extremist or terrorist organizations or individuals,
- at least 58 videos including misogynist content and
- 90 videos expressing anti-LGBTQ+ sentiments.
Black people were also frequent targets of hate, with some videos mocking or celebrating the death of George Floyd, who was murdered by a white police officer in Minneapolis in 2020.
The most-viewed video in the sample had 2 million views. This video featured anti-Asian hatred linked to COVID-19, while three of the top ten most-viewed videos, with a collective 3.5 million views, featured content first produced by jailed white supremacist Paul Miller.
The average number of views each video in the sample received was 13,300. However, this figure is skewed by a small cluster of videos with very large view counts: 760 videos received fewer views than this average. Only 15 videos received over 100,000 views, while 168 videos received fewer than 100 views.
Misogynistic Content Thrives on the Platform
Misogynistic videos were also captured in the sample. In particular, a number of videos were posted by accounts promoting Men Going Their Own Way, a social movement that is part of the wider manosphere. The manosphere is a collection of mostly online communities marked by their overt and extreme misogyny and their rejection of feminism, which they believe has come to dominate society at the expense of men.
Other videos combined misogyny with other forms of hate, such as white supremacy. These videos featured clips or screenshots from news reports showing white women in mixed-race relationships and presented them as “race traitors.”
But Engagement on Hateful Content Is Low
Low engagement figures on hateful and extremist content are a positive sign, ISD notes, suggesting that these kinds of creators fail to achieve mainstream reach.
Of the 1,030 videos examined:
• 36 videos (3.5% of the full sample) received 0 likes, 266 videos (26% of the full sample) received 0 comments and 327 videos (32% of the full sample) received 0 shares.
• 976 videos (95%) received at least one like and 352 videos (34%) received over 100 likes. Due to removal of content or archiving errors, data on the number of likes for 18 videos was not accessible.
• 740 videos (72%) received at least one comment and 241 (23%) received over 25 comments. Due to removal of content or archiving errors, data on the number of comments for 24 videos was not accessible.
• 597 videos (58%) received at least one share and 134 videos (13%) received over 50 shares. Due to removal of content or archiving errors, data on the number of shares for 106 videos was not accessible.
Content Moderation on TikTok
ISD’s research demonstrates that TikTok has a content moderation problem: extremist and even terrorist-related footage is easily discoverable on the platform, despite contravening its terms of service.
Moreover, at the time of writing the report, the Institute found that 81.5% of the extremist videos they identified on the platform were still live.
ISD recommends that TikTok improve its understanding of how creators spread extremist content and develop more nuanced policies that go beyond simple hashtag bans. The report also notes that TikTok’s interface is “severely limited in the data it provides to researchers or the public,” and suggests improvements to search functionality and greater transparency about how its algorithm works.