In response to concerns about deepfakes, Meta has announced a new approach to handling manipulated images and audio on its platforms. Rather than deleting such content, Meta will label and contextualize it, giving users information about the media's authenticity. The decision comes as several countries prepare for upcoming elections, and Meta has acknowledged the difficulty of distinguishing machine-generated content from reality.

To address the issue, Meta has begun labeling AI-generated photos uploaded to Facebook, Instagram, and Threads. The White House's recommendation to watermark AI-generated media has also prompted Meta to develop tools for detecting synthetic content. Meta has already implemented a watermark indicating that an image was "Imagined with AI" and plans to extend this feature to AI-generated content from other providers.

In a blog post, Meta encouraged users to weigh several factors when evaluating AI-generated content, emphasizing the trustworthiness of the account and the authenticity of the content itself.