Social Media’s Dark Side: How Graphic Videos of Charlie Kirk and Others Expose Content Moderation Failures

Graphic videos of deaths circulate on social media, sparking calls for better content moderation. Platforms face scrutiny.
The founder of Turning Point USA, Charlie Kirk, is shown during a speaking engagement at Texas State University in San Marcos, Texas. By Carrington Tatum / Shutterstock.com.

Executive Summary

  • Graphic videos depicting recent deaths, such as those of Charlie Kirk and Iryna Zarutska, are rapidly circulating on major social media platforms, intensifying calls for enhanced content moderation.
  • Despite some efforts by companies like TikTok, Meta, and YouTube to remove or restrict graphic content and apply warnings, explicit videos remain easily accessible, often through autoplay features and search suggestions, even for younger users.
  • Online safety advocates criticize current moderation as insufficient, and medical experts warn of “vicarious trauma” and negative mental health impacts from unfiltered exposure to traumatic events on social media.
The Story So Far

The rapid circulation of graphic videos on social media is exacerbated by some tech companies reducing their content moderation teams, alongside platform policies that often permit certain violent content and rely on default autoplay features. This allows explicit material to spread quickly and reach users, including younger audiences, without the filtering or blurring standards typically applied by traditional media outlets.
Why This Matters

The rapid, widespread circulation of graphic death videos on major social media platforms, despite existing moderation policies and content warnings, highlights the ongoing inadequacy of tech companies’ content moderation efforts, particularly around autoplay features and search suggestions. This failure exposes users, including younger audiences, to unfiltered traumatic content, raises mental health concerns such as vicarious trauma, and intensifies public and expert scrutiny of platforms’ responsibilities and the need for more robust safeguards.
Who Thinks What?

  • Social media platforms, including TikTok, Meta (Instagram, Facebook), and YouTube, state they are actively enforcing community guidelines, removing the most graphic and rule-violating content, applying sensitive warnings, restricting access for adult accounts, and working to prevent unexpected exposure, though some distant footage may remain.
  • Online safety advocates, such as the Tech Transparency Project, argue that current content moderation measures are insufficient and failing, highlighting the easy accessibility of graphic videos for younger audiences even with safety settings enabled, and the problems posed by default autoplay features.
Graphic videos depicting the recent deaths of Charlie Kirk in Utah and Iryna Zarutska in North Carolina have rapidly circulated across major social media platforms, sparking renewed calls for tech companies to enhance their content moderation efforts. The widespread distribution of these explicit videos, which include footage of Kirk’s shooting and Zarutska’s stabbing, has intensified public debate over the effectiveness of current platform policies, particularly as some companies have reduced their moderation teams.

Platform Policies and Autoplay Features

Each social media platform maintains its own content rules, especially concerning violent material. While most platforms permit some violent content, they typically restrict particularly gory or bloody videos and limit access for younger users. Sites like X, TikTok, and Facebook autoplay videos by default, a strategy designed to capture user attention immediately, unless the content has been specifically designated as restricted. YouTube also autoplays non-restricted content when users hover over a video, potentially exposing viewers to highly sensitive and graphic material.

Rapid Spread of Graphic Content

Videos related to Kirk’s shooting continued to be pushed to user feeds days after the incident. A CNN search for “Charlie Kirk” on Instagram returned graphic videos among the top results. On TikTok, the app’s search page suggested terms such as “raw video footage” and “actual incident footage” without any user prompting, and those terms linked directly to explicit videos.

TikTok told CNN that it was removing those suggested search terms and had been actively taking down close-up videos of Kirk’s shooting. Despite these efforts, some graphic videos, occasionally accompanied by content warnings, remained easily accessible on the platform.

Company Responses and Mitigation Efforts

Jamie Favazza, a spokesperson for TikTok, said the company is committed to enforcing its Community Guidelines and is adding safeguards to prevent users from unexpectedly viewing rule-violating footage. Favazza clarified that not all videos of the shooting would be removed: TikTok’s policies prohibit “gory, gruesome, disturbing, or extremely violent content” but allow some footage, such as video captured from a distance, to remain viewable.

Meta, the parent company of Instagram and Facebook, said it is applying a “Mark as Sensitive” warning label to footage of the shooting and removing content that glorifies, represents, or supports the incident or the perpetrator. The company also said it is restricting such videos to adult accounts. YouTube reported that it is removing some graphic videos of the shooting, particularly those lacking context, and is actively monitoring its platform while elevating news content to help users stay informed. Representatives for X did not immediately respond to requests for comment.

Criticism from Online Safety Advocates

Online safety activists argue that current moderation measures are insufficient or failing outright. Katie Paul, director of the Tech Transparency Project, an organization advocating for stronger online safety protections for young people, pointed to how easily younger audiences can find videos of Kirk’s shooting, even with teen settings and safety precautions activated. A test Instagram account the Tech Transparency Project set up with teen settings enabled could readily access videos of Kirk’s shooting; an autoplay video of the incident appeared as the very first result in a search for his name.

In response to the Tech Transparency Project’s findings, a Meta spokesperson acknowledged that there can be a temporary lag in applying warning screens when slightly different versions of known videos are uploaded. The spokesperson said the issue was unrelated to teen account settings.

Broader Implications and Mental Health Concerns

Unlike mainstream media outlets, which typically edit or blur graphic content to protect viewers and victims, social media platforms lack broad, universally enforced standards. That gap is particularly significant given that more than half of U.S. adults report getting at least some of their news from social media. Medical experts caution that exposure to traumatic and graphic content can lead to “vicarious trauma,” in which individuals absorb the trauma experienced by others, with potential consequences for their mental and physical health.

The ongoing proliferation of graphic content highlights a significant challenge for social media platforms as they navigate content moderation in an era of rapid information sharing. The debate underscores the tension among free expression, user safety, and the mental health implications of unfiltered exposure to traumatic events, prompting continued scrutiny of tech companies’ responsibilities and their evolving policies.
