Executive Summary
Graphic videos depicting the recent deaths of Charlie Kirk in Utah and Iryna Zarutska in North Carolina have rapidly circulated across major social media platforms, sparking renewed calls for tech companies to enhance their content moderation efforts. The widespread distribution of these explicit videos, which include footage of Kirk’s shooting and Zarutska’s stabbing, has intensified public debate over the effectiveness of current platform policies, particularly as some companies have reduced their moderation teams.
Platform Policies and Autoplay Features
Each social media platform maintains its own content rules, especially concerning violent material. Most platforms permit some violent content but restrict particularly gory or bloody videos and limit access for younger users. Unless a video has been specifically designated as restricted, platforms such as X, TikTok, and Facebook autoplay it by default, a design intended to capture user attention immediately. YouTube likewise autoplays non-restricted content when users hover over a video, potentially exposing individuals to highly sensitive and graphic material.
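The practical consequence of that design is that the restriction flag is the only brake on playback. Below is a minimal sketch of that logic; the names (FeedVideo, is_restricted, should_autoplay) are hypothetical and not drawn from any platform's actual code.

```python
# Hypothetical sketch of default-on autoplay gated by a restriction flag.
# FeedVideo, is_restricted, and should_autoplay are illustrative names,
# not any platform's real API.
from dataclasses import dataclass


@dataclass
class FeedVideo:
    video_id: str
    is_restricted: bool  # set once moderation marks the video as sensitive or graphic


def should_autoplay(video: FeedVideo) -> bool:
    """Autoplay by default; only an explicit restriction flag holds a video back."""
    return not video.is_restricted


# The gap described above: a graphic clip that moderation has not yet flagged
# carries is_restricted=False and therefore starts playing automatically.
unflagged_clip = FeedVideo("clip-001", is_restricted=False)
print(should_autoplay(unflagged_clip))  # True
```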
Rapid Spread of Graphic Content
Videos related to Kirk’s shooting continued to be pushed to user feeds days after the incident. A CNN search for “Charlie Kirk” on Instagram returned graphic videos among the top results. On TikTok, the app’s search page suggested terms such as “raw video footage” and “actual incident footage” without any user prompting, and those suggestions linked directly to explicit videos.
TikTok informed CNN that it was in the process of removing these suggested search terms and had been actively taking down close-up videos of Kirk’s shooting. Despite these efforts, some graphic videos, occasionally accompanied by content warnings, remained easily accessible on the platform.
Company Responses and Mitigation Efforts
Jamie Favazza, a spokesperson for TikTok, issued a statement affirming the company’s commitment to enforcing its Community Guidelines and implementing additional safeguards to prevent users from unexpectedly viewing rule-violating footage. Favazza clarified that not all videos of the shooting would be removed, as TikTok’s policies prohibit “gory, gruesome, disturbing, or extremely violent content,” but allow some footage, such as that captured from a distance, to remain viewable.
Meta, the parent company of Instagram and Facebook, stated it is applying a “Mark as Sensitive” warning label to footage of the shooting and removing content that glorifies, represents, or supports the incident or perpetrator. The company also indicated it is restricting such videos to adult accounts. YouTube reported it is removing some graphic videos of the shooting, particularly those lacking context, and is actively monitoring its platform while elevating news content to help users stay informed. Representatives for X did not immediately respond to requests for comment.
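Taken together, the stated policies share a common shape: remove the most graphic or glorifying material, then label and age-restrict the rest. The toy decision function below captures only that shape; the inputs and actions are assumptions made for illustration, not any company’s actual moderation pipeline.

```python
# Illustrative only: a toy decision in the shape of the policies described above
# (remove glorifying or close-up footage, label and age-restrict other footage
# of the shooting). Inputs and actions are assumptions, not real API values.
from enum import Enum, auto


class Action(Enum):
    REMOVE = auto()
    MARK_SENSITIVE_ADULTS_ONLY = auto()
    ALLOW = auto()


def moderate(glorifies_violence: bool, graphic_close_up: bool, shows_shooting: bool) -> Action:
    if glorifies_violence or graphic_close_up:
        return Action.REMOVE                      # prohibited outright
    if shows_shooting:
        return Action.MARK_SENSITIVE_ADULTS_ONLY  # e.g. distant footage: warning label, adult accounts only
    return Action.ALLOW


# Example: distant footage of the incident draws a warning label rather than removal.
print(moderate(glorifies_violence=False, graphic_close_up=False, shows_shooting=True))
```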
Criticism from Online Safety Advocates
Online safety advocates argue that current moderation measures are insufficient or failing outright. Katie Paul, director of the Tech Transparency Project, an organization advocating for enhanced online safety for young people, highlighted how easily younger audiences can reach videos of Kirk’s shooting even when teen settings and safety precautions are activated. A test Instagram account the Tech Transparency Project set up with teen settings enabled could readily access videos of Kirk’s shooting; an autoplaying video of the incident appeared as the very first result when searching for his name.
In response to the Tech Transparency Project’s claims, a Meta spokesperson acknowledged that there could be a temporary lag in applying warning screens when slightly different versions of known videos are uploaded. The spokesperson clarified that this issue was unrelated to specific teen account settings.
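Meta did not describe its matching systems, but the lag it acknowledges is consistent with how near-duplicate detection generally works: a re-encoded or cropped copy no longer matches a known video exactly, so it must be fingerprinted and compared against known violating content before a warning screen can be attached. The sketch below uses toy 64-bit fingerprints and a Hamming-distance threshold purely as a generic illustration, not as a description of Meta’s pipeline.

```python
# Generic illustration of near-duplicate matching with toy 64-bit fingerprints.
# Real systems use perceptual video hashes; the constants here are made up.

def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two 64-bit fingerprints."""
    return bin(a ^ b).count("1")


KNOWN_VIOLATING_FINGERPRINTS = {0xF0F0F0F0F0F0F0F0}  # fingerprints of already-flagged videos


def needs_warning_screen(fingerprint: int, max_distance: int = 6) -> bool:
    """Flag an upload whose fingerprint sits close to any known violating video."""
    return any(
        hamming_distance(fingerprint, known) <= max_distance
        for known in KNOWN_VIOLATING_FINGERPRINTS
    )


# A lightly re-encoded copy flips a few bits and still matches within the threshold;
# a more heavily altered copy may not, and waits for slower review before it is labeled.
print(needs_warning_screen(0xF0F0F0F0F0F0F0F1))  # True  (near-duplicate of a known video)
print(needs_warning_screen(0x0123456789ABCDEF))  # False (too different to match automatically)
```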
Broader Implications and Mental Health Concerns
Unlike mainstream media outlets, which typically adhere to strict standards by editing or blurring graphic content to protect viewers and victims, social media platforms lack such broad, universally enforced standards. This is particularly significant given that more than half of U.S. adults report receiving at least some of their news from social media. Medical experts caution that exposure to traumatic and graphic content can lead to “vicarious trauma,” where individuals absorb the trauma experienced by others, potentially impacting their mental and physical health.
The ongoing proliferation of graphic content highlights a significant challenge for social media platforms as they navigate content moderation in an era of rapid information sharing. The debate underscores the tension between free expression, user safety, and the mental health implications of unfiltered exposure to traumatic events, prompting continued scrutiny of tech companies’ responsibilities and their evolving policies.