Google, Meta and OpenAI announce measures to identify content created with artificial intelligence

Big tech companies are testing labels to flag AI-generated material and combat misinformation amid the growing risk of deepfakes

Google, Meta, and OpenAI have announced new measures to make it easier to identify images or files that have been produced or edited with artificial intelligence (AI). The initiative aims to prevent the spread of false messages that could influence election results or have other unintended consequences. These companies had previously announced plans to prevent the misuse of AI in the 2024 electoral processes. They have joined the Coalition for Content Provenance and Authenticity (C2PA), which proposes a standard certificate and brings together many digital industry players, media outlets, banks, and camera manufacturers. However, they acknowledge there is no single, universally effective solution for identifying AI-generated content. The initiatives range from visible watermarks on the images themselves to hidden messages in the file metadata or in artificially generated pixels. Google also claims to have found a way to identify audio with SynthID, a tool still in its beta phase.

The companies have not given clear timelines for the full implementation of their measures, but they recognize the need for proactive protection, especially as 2024, a year full of important elections, approaches.
