Google has updated its ad content policy to ban advertisers from promoting services that could be used to create deepfake pornography.

Google already prevented advertisers from promoting “sexually explicit content,” which the company defines as “text, image, audio, or video of graphic sexual acts intended to arouse.” Starting May 30, that policy also prohibits “promoting synthetic content that has been altered or generated to be sexually explicit or contain nudity.” Google Ad Manager users were emailed about the change, which is designed to combat ads for deepfake pornography.

In a statement to 404 Media, which first reported the change, Google said: “We have long prohibited both sexually explicit and non-consensual sexual content on our ads platforms and these policies have typically prevented the promotion of deepfake pornography services. To ensure a comprehensive approach, we are updating our ads policies to make it clear that we do not allow the promotion of these services – regardless of whether the content is sexually explicit or not.”

The change also applies to Publisher and Shopping Ads, 404 Media reports. Advertisers who violate the policy could have their accounts suspended.

According to its annual Ads Safety report, Google removed over 1.8 billion ads for violating its policies on sexual content in 2023.
Last month, Apple removed from the App Store several generative AI apps that could be used to create nonconsensual nude images. Developers were reportedly promoting these apps via Instagram ads with taglines like “undress any girl for free” and “any clothing delete.”