
TikTok to add ‘distressing content’ warning when users search for certain terms

A policy director with TikTok announced the decision Tuesday, explaining that the warning will appear “when a user searches for terms that may bring up content that some may find distressing.” (Sean Gallup/Getty Images)

 


(NEXSTAR) – TikTok is hoping to shield users from “distressing content” with a new warning page.

A policy director with TikTok announced the decision Tuesday, explaining that the warning will appear “when a user searches for terms that may bring up content that some may find distressing.” The notice will appear over the search results page, temporarily blocking the content from view until users choose to opt in by clicking “show results.”

In a press release, TikTok offered “scary makeup” as a search term that would prompt such a warning, but did not elaborate on how it would determine or categorize potentially distressing search terms. When contacted, a representative for the platform offered additional examples including “blood” and “sfx makeup” as searches that would cue the warning.

Similar warning screens already appear on TikTok videos that are deemed to contain sensitive content. TikTok’s decision to extend the warnings to search terms is one of the company’s latest efforts to “keep TikTok a safe space,” according to the press release.

The “distressing content” warnings for search terms will begin appearing this month, TikTok confirmed.

TikTok also announced Tuesday that it is adding resources to support users who may be dealing with suicidal thoughts or eating disorders. Specifically, TikTok pointed to a new well-being guide in its Safety Center, alongside information and tips for communicating with people who may need help.

News of TikTok’s new policy comes a week after the Wall Street Journal conducted a test of the app’s “For You” feature, which suggests content based on a user’s activity. The Journal found that the app’s algorithm suggested videos containing sexual and drug-related content, even to accounts registered to teens as young as 13.

A TikTok spokeswoman told the Journal that some of the content was not in violation of the platform’s policies, but confirmed that some of the videos had since been removed.

Videos that violate TikTok’s dangerous acts policy, meanwhile, are outright banned from the platform. Such videos include anything that depicts, normalizes or promotes suicide, self-harm, eating disorders, dangerous acts, bullying, nudity, or harmful activities by or against minors, among other types of harmful content.

Last month, TikTok removed videos of people taking part in social media’s “milk crate challenge,” in which users attempted to climb up and down makeshift staircases built from unsecured milk crates. At the time, the trend had been called out as dangerous by local police and health departments across the country, and even by the FDA.


Copyright 2024 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
