TikTok deletes 450,000 videos in Kenya for policy violations in first quarter of 2025

A total of 450,000 short videos were pulled down from TikTok in Kenya between January and March 2025 for breaching the platform’s community guidelines, the company has revealed.

According to TikTok’s Quarter One 2025 Community Guidelines Enforcement report, the removals were part of a global enforcement campaign targeting content that violated its standards around integrity and authenticity, safety and civility, privacy and security, mental and behavioural health, regulated goods and commercial activity, as well as sensitive and mature themes.

The platform noted that 92.1 per cent of the videos taken down were removed before they were viewed, while 94.3 per cent were taken down within 24 hours of posting.

Additionally, over 43,000 accounts in Kenya were banned for engaging in activities that violated the platform’s rules, including impersonation, spreading harmful content or being operated by users under the age of 13.

The report also shows that 6,467,926 accounts were removed globally within the same period for similar violations, including policy breaches, impersonation, or underage usage.

Across all markets, TikTok removed over 211 million videos, with more than 187 million of those flagged and taken down by automated moderation systems. However, 7.5 million videos were later reinstated after review.

Within the same period, the platform also shut down 19,161,569 LIVE sessions, 1,271,228 of which were later restored. It attributed the rise in content removals to improved automation and a shift towards more proactive moderation systems.

In terms of user interaction, TikTok removed over 1.1 billion comments for violating community standards. It also took action against fake engagement by removing over 4 billion fake likes, blocking more than 6 billion fake like attempts and preventing over 8 billion fake follow requests.

In the same quarter, the platform blocked more than 146 million fake accounts and 199 million fake followers, highlighting its efforts to preserve authenticity on the platform.

TikTok also noted that over 99 per cent of violating content globally was removed before being reported, and over 90 per cent before receiving any views.

“The vast majority of violations (94 per cent) were removed within 24 hours. This was also a quarter where automated moderation technologies removed more violative content than ever, over 87 per cent of all video removals. In addition, TikTok’s moderation technologies helped identify violative livestreamed content faster and more consistently,” TikTok said.

The company also revealed that it is piloting large language models (LLMs) to support content moderation at scale, particularly in enforcing rules for comments.

“LLMs can comprehend human language and perform highly specific, complex tasks. This can make it possible to moderate content with high degrees of precision, consistency, and speed,” the platform said.

The platform added that these AI-driven moderation systems are designed not only to improve accuracy but also to support the well-being of human moderators by reducing the amount of harmful content they have to review manually.

“By integrating advanced automated moderation technologies with the expertise of thousands of trust and safety professionals, TikTok enables faster and consistent removal of content that violates our Community Guidelines. This approach is vital in mitigating the damaging effects of misinformation, hate speech, and violent material on the platform. With a proactive detection rate now at 99 per cent globally, TikTok is more efficient than ever at addressing harmful content before users encounter it,” the company said.

TikTok also emphasised that moderation of LIVE content remains a top priority. The number of LIVE rooms terminated in Q1 2025 rose by 50 per cent compared to the previous quarter.

“This increase shows how effective TikTok’s prioritisation of moderation accuracy has been, as the number of appeals remains steady amid the increase in automated moderation,” reads the report.

The platform also revealed that it has partnered with Childline Kenya to provide in-app access to local helplines for users who report content related to suicide, self-harm, hate, or harassment. The services offered include free psychological support, counselling and emergency assistance to affected users.

Further, TikTok announced a partnership with Mental360 in June 2025 to create locally relevant, evidence-based content aimed at promoting open conversations, reducing stigma and increasing awareness around mental health in Kenya.

TikTok said the new mental health initiatives come at a critical time, when access to support systems is increasingly important for users, especially the youth, navigating social media platforms.
