The Dark Side of Social Media: How Harmful Content Is Targeting Teens
In a world where social media platforms dominate our daily lives, the issue of harmful content being recommended to teenagers has come to the forefront. Cai, a young man who encountered disturbing material in his social media feeds, shared his story with the BBC.
Cai recalls scrolling through his phone at the age of 16 when he was suddenly bombarded with violent and misogynistic videos. He questioned why he was being shown such content and felt helpless to avoid it. Andrew Kaung, a former analyst at TikTok, shed light on the issue, revealing that the platform's algorithms pushed harmful content to teenage boys, while teenage girls were recommended different content based on their interests.
The use of AI tools by social media companies to moderate content has its limitations, as not all harmful material can be identified. Andrew Kaung raised concerns about the lack of action taken by companies like TikTok and Meta (owner of Instagram) to address the issue, leaving young users like Cai at risk.
Despite social media companies' investment in safety measures, Cai continued to be exposed to violent and misogynistic content. His attempts to influence the algorithms by marking such content as unwanted proved futile. One of Cai's friends even fell under the influence of controversial influencers, adopting misogynistic views as a result.
The algorithms driving content recommendations on platforms like TikTok prioritize engagement, leading to the proliferation of harmful content. Andrew Kaung emphasized the need for clearer moderation systems and specialized moderators to tackle extreme content. However, his suggestions were rejected by TikTok at the time.
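The dynamic described here, a recommender that ranks purely by predicted engagement with no penalty for harmful material, can be illustrated with a minimal sketch. All post names and scores below are hypothetical, not drawn from TikTok's actual system:

```python
# Hypothetical sketch: an engagement-only recommender will surface
# harmful posts whenever they happen to score highest on engagement.

posts = [
    {"id": "cooking-tips", "predicted_engagement": 0.42, "harmful": False},
    {"id": "violent-clip", "predicted_engagement": 0.91, "harmful": True},
    {"id": "sports-recap", "predicted_engagement": 0.57, "harmful": False},
]

def rank_by_engagement(posts):
    """Order posts purely by predicted engagement, descending."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

feed = rank_by_engagement(posts)
print([p["id"] for p in feed])
# The harmful clip lands at the top of the feed, because nothing in
# the ranking objective distinguishes harmful from benign content.
```

The sketch makes Kaung's point concrete: without an explicit moderation signal in the ranking objective, maximizing engagement alone can actively promote the most extreme material.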
As the debate on social media regulation intensifies, Cai advocates for better tools that allow users to control what they are shown. He believes that social media companies prioritize profit over user safety and calls on them to listen more to what their users want.
In the UK, new regulations under the Online Safety Act will hold social media firms accountable for verifying children's ages and preventing the recommendation of harmful content to young users. Ofcom, the UK media regulator, will enforce these measures, which aim to protect children from the negative impacts of harmful content online.
The battle against harmful content on social media continues, with teenagers like Cai at the forefront of the fight for a safer online environment. As social media companies face increasing scrutiny, the need for effective moderation and user control tools becomes more pressing.