
YouTube launches content labeling tool to fight misinformation

AI-generated videos are increasingly being used to spread misinformation in Africa. One such instance is a video in which Chinonso Egemba, popularly known as “Aproko Doctor,” appears to endorse a pain relief cream. The video also features a TVC reporter who does the same.

However, the video was later discovered to be a compilation of old recordings, with the audio and mouth movements altered to create the impression that he was vouching for the product’s effectiveness.

A similar instance involves manipulated videos of Kayode Okikiolu, a Channels TV reporter, which scammers used to make it appear he was endorsing various products in order to deceive Nigerians. The videos were reportedly extracted from legitimate news reports and altered to give the impression that he was endorsing health products or video games.

YouTube has introduced a content labeling tool to combat this kind of misinformation. By flagging manipulated videos, the tool aims to reduce their spread and to promote trust between users and creators through transparency.

According to the Google-owned video-sharing platform, when creators upload content, they will be given new options to specify whether it contains realistic altered or synthetic material.

The new feature applies to content such as “digitally altering content to replace the face of one individual with another or synthetically generating a person’s voice to narrate a video, manipulating footage of real events or places, and generating realistic scenes.”

YouTube, meanwhile, announced that creators will not be required to disclose when they use generative artificial intelligence (AI) for productivity purposes such as script generation, content ideas, or automatic captioning.

It also stated that disclosure will not be required for clearly unrealistic content, color adjustment or lighting filters, special effects such as background blur or vintage effects, beauty filters, or other visual enhancements. In short, creators need not disclose when the synthetic media is unrealistic or the changes are insignificant.

Big tech companies have taken measures to address the growing issue of AI-generated misinformation. For instance, Meta announced in January 2024 that it was “working with industry partners on common technical standards for identifying AI content, including video and audio” to label images that users post to Facebook, Instagram, and Threads when it detects industry-standard indicators that they are AI-generated.

(Techpoint Africa)
