Eight in ten of the more than 3.3 billion pieces of content removed from seven social media platforms were either spam, adult or explicit content, or hate speech and acts of aggression, according to a report from the Global Alliance for Responsible Media (Garm). The report covers content from Facebook, Instagram, Twitter, TikTok, Pinterest, Snap and YouTube.
Garm is a cross-industry initiative established by the World Federation of Advertisers to address harmful content. It is supported by other trade bodies such as the Association of National Advertisers, the Incorporated Society of British Advertisers and the American Association of Advertising Agencies.
The organisation highlights progress by YouTube in removing accounts associated with hate speech and acts of aggression, by Facebook in reducing the prevalence of such content on its site, and by Twitter in content removal.
The improvements came amid increased reliance on automated content moderation to manage blocking and reinstatements, as Covid-19 disruptions affected moderation teams, according to the report.
The report also includes a framework for advertisers to understand how well platforms are enforcing policies. The framework includes four questions:
- How safe is the platform for consumers?
- How safe is it for advertisers?
- How effective is the platform in enforcing its safety policies?
- How responsible is the platform in correcting mistakes?
A further report, due later this year, will add the gaming-focused social platform Twitch.