
TikTok Will Start Watermarking AI Content to Warn Users


TikTok will start working with a digital watermark system called Content Credentials to help identify more photos, videos and audio-only content created using artificial intelligence tools from the likes of Microsoft, Adobe and OpenAI.
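Content Credentials attach provenance metadata directly to a media file rather than relying on a visible overlay; for JPEG images, the underlying C2PA standard embeds its signed manifest in APP11 marker segments. As a rough, stdlib-only sketch (not a real verifier, which would need a full C2PA library to parse and validate the manifest), a scanner can at least detect whether such a segment is present:

```python
def has_app11_segment(jpeg_bytes: bytes) -> bool:
    """Walk JPEG marker segments and report whether an APP11 segment exists.

    Content Credentials (C2PA) manifests are carried in APP11 (0xFFEB)
    segments in JPEG files. Presence alone proves nothing about validity;
    this is only a first-pass check.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            return False  # malformed marker stream
        marker = jpeg_bytes[i + 1]
        if marker == 0xEB:  # APP11: where C2PA/JUMBF data lives
            return True
        if marker in (0xD9, 0xDA):  # EOI or start-of-scan: stop walking
            return False
        seg_len = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        i += 2 + seg_len  # skip marker bytes plus the segment payload
    return False
```

For example, `has_app11_segment(open("photo.jpg", "rb").read())` would return `True` for an image carrying an embedded manifest. Actual verification of who created the content and how it was edited requires checking the manifest's cryptographic signatures.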

The Chinese-owned social networking giant, which is fighting a potential ban in the US over national security concerns, already labels AI-generated content made using TikTok’s own AI effects tools. The company said its new moves are part of an expanding effort to fight disinformation and misinformation.


“AI enables incredible creative opportunities, but can confuse or mislead viewers if they don’t know content was AI-generated,” TikTok said in a statement. “Labeling helps make that context clear.”

Read more: How Close Is That Photo to the Truth? What to Know in the Age of AI

TikTok’s moves mark another way the tech industry is responding to growing concerns about the pervasiveness of AI-generated content. Over the past couple of years in particular, AI technology that creates text, videos and audio has become much easier to use. (Check out CNET’s hands-on reviews of AI image-generating tools like Google’s ImageFX, Adobe Firefly and OpenAI’s Dall-E 3 as well as more AI tips, explainers and news on our AI Atlas resource page.)

At the same time, AI content has become much more believable as well. Media and information experts have warned that these converging trends could create significant risks to the public, with realistic lies and misinformation flooding the internet. 

The concerns aren’t just theoretical. Earlier this year, a political consultant made mass-scale robocalls using an AI-powered re-creation of US President Joe Biden’s voice. In that case, the very real-sounding AI recording encouraged people in New Hampshire not to vote in the primary election.

Experts believe this is just the beginning of where AI-driven disinformation is headed, particularly with the 2024 US presidential election approaching.

TikTok isn’t the only social media company working to identify AI-powered posts. Last month, Facebook and Instagram owner Meta announced plans to label video, audio and images as “Made with AI” either when its systems detect AI involvement or when creators disclose that information during an upload.

Google’s YouTube has also required creators to disclose AI-manipulated videos, citing examples including “realistic” likenesses of people or scenes, as well as altered footage of real events or places. OpenAI has also said it will add AI-identifying data to all images generated using its systems.

So far, it appears users broadly appreciate these efforts. Last month, Meta cited a study in which 82% of more than 23,000 respondents across 13 countries favored labels on AI-generated content “that depicts people saying things they did not say.”

TikTok said it plans to expand its AI-labeling efforts, including by adding data to photos or videos created using AI tools on its platform that people might download and share elsewhere. 

“As AI evolves, we continue to invest in combating harmful AI-generated content by evolving our proactive detection models, consulting with experts, and partnering with peers on shared solutions,” the company added. 

Read more: AI Atlas: Your Guide to Today’s Artificial Intelligence

Editors’ note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you’re reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI policy.
