In the coming months, Meta will begin labeling AI-generated images uploaded to Facebook, Instagram and Threads as the company prepares for the US elections. Users who upload realistic AI-generated audio or video but fail to disclose its origin will face penalties.
Meta’s global affairs president, Nick Clegg, said the measures are intended to galvanize the tech industry as AI-generated media becomes increasingly difficult to distinguish from the real thing. The company is also developing tools to detect AI-generated content even when its metadata has been edited. Meta has its own Imagine image generator, and images it creates on the company’s platforms already carry an “Imagined with AI” label; images created with tools from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock will be marked the same way.
According to Clegg, the technology industry is lagging in developing standards for identifying AI-generated audio and video; Meta must carefully monitor how such media can be used for forgery, but it cannot identify all of this material on its own. With the support of partners, the company is pursuing several initiatives to verify the authenticity of content. Adobe has introduced the Content Credentials system, which marks AI content in metadata and when it is displayed on sites, and Google recently introduced a new version of its SynthID watermark for audio files. Meta will soon begin requiring users to disclose when realistic video or audio is AI-generated, Clegg added; those who do not will face consequences ranging from a warning to removal of the post.
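The metadata marking Clegg alludes to can be made concrete. Schemes in this space, such as Adobe's Content Credentials, embed a provenance record in the image file's XMP metadata, including an IPTC "digital source type" entry whose value identifies wholly AI-generated media. The sketch below is an illustration only, not Meta's or Adobe's actual code: the helper name and sample XMP fragment are hypothetical, and real tooling parses and cryptographically verifies the record rather than scanning for a string.

```python
# Illustrative sketch: flagging an image as AI-generated via an IPTC
# "digital source type" entry in its XMP metadata packet.

# IPTC NewsCodes URI for media wholly generated by a trained algorithm.
AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def looks_ai_generated(xmp_packet: str) -> bool:
    """Return True if the XMP packet declares an AI digital source type."""
    # A plain substring check; production code would parse the RDF/XML
    # and validate the attached cryptographic signature.
    return AI_SOURCE_TYPE in xmp_packet

# A minimal XMP fragment of the kind a generator might embed (hypothetical).
sample_xmp = (
    '<x:xmpmeta xmlns:x="adobe:ns:meta/">'
    '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">'
    '<rdf:Description'
    ' xmlns:Iptc4xmpExt="http://iptc.org/std/Iptc4xmpExt/2008-02-29/"'
    ' Iptc4xmpExt:DigitalSourceType='
    '"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"'
    '/></rdf:RDF></x:xmpmeta>'
)

print(looks_ai_generated(sample_xmp))  # True
```

A marker like this is easy to strip, which is why the article notes Meta is also working on detection that survives metadata edits.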
The United States holds a presidential election this year, and in the run-up many viral AI-generated posts impersonating politicians have already appeared. According to Clegg, however, such material either carries no political weight or is quickly caught by platform moderators. Meta is also beginning to test large language models trained on its community standards, which will serve as an auxiliary tool for human moderators.