Mark Zuckerberg’s Meta, the parent company of Facebook and Instagram, is claiming to address the potential misuse of AI during the upcoming 2024 presidential election through a new system to label AI-generated videos, images, and audio posted on its platforms.
The New York Post reports that Meta Vice President of Content Policy Monika Bickert revealed in a blog post on Friday that the company will begin applying “Made with AI” labels to AI-generated videos, images, and audio posted on its platforms in May. The new policy expands the company’s previous approach, which only addressed a narrow set of doctored videos.
In addition to the “Made with AI” labels, Meta will apply separate and more prominent labels to digitally altered media that poses a “particularly high risk of materially deceiving the public on a matter of importance,” regardless of whether the content was created with AI or other tools. This marks a significant shift in how Meta handles manipulated content: instead of removing a limited set of posts, the company will keep the content up while providing viewers with information about how it was made.
The new labeling approach will apply to content posted on Meta’s Facebook, Instagram, and Threads platforms, while other services like WhatsApp and Quest virtual reality headsets are covered by different rules. Meta will begin applying the more prominent “high-risk” labels immediately, according to a company spokesperson.