
Google is joining the growing number of companies standing up to sexually explicit deepfakes.

The Alphabet division has made it easier for users to report nonconsensual imagery found in search results, including images created with artificial intelligence tools. Users could already request the removal of such images before the update; under the new policy, whenever a request is granted, the company will also scan for duplicates of the nonconsensual image and remove them, and it will attempt to filter explicit results from similar searches.
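As a rough illustration of what scanning for duplicates of a removed image might involve, the sketch below uses a simple perceptual "average hash" to flag near-identical copies (resizes, recompressions) of an already-removed image. Google has not published how its system works, so the hashing scheme, the threshold, and the function names here are all assumptions made for illustration.

```python
# Illustrative sketch only: this is NOT Google's actual duplicate-detection pipeline.
# It assumes a perceptual "average hash" as one plausible way to spot near-identical
# copies of an image that has already been removed under the policy.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to a size x size grayscale image and set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count the bits that differ between two hashes."""
    return bin(a ^ b).count("1")

def is_duplicate(candidate: str, removed_hashes: list[int], threshold: int = 5) -> bool:
    """Treat an image as a duplicate if its hash is within `threshold` bits of any removed image."""
    h = average_hash(candidate)
    return any(hamming(h, r) <= threshold for r in removed_hashes)
```

In practice a system like this would compare hashes of newly crawled images against a stored set of hashes from granted removal requests, so that copies can be filtered without the victim having to report each one individually.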

“With every new technology advancement, there are new opportunities to help people—but also new forms of abuse that we need to combat,” product manager Emma Higham wrote in a blog post. “As generative imagery technology has continued to improve in recent years, there has been a concerning increase in generated images and videos that portray people in sexually explicit contexts, distributed on the web without their consent.”

Google has also changed its ranking system to demote explicit deepfake content generally. Even direct searches for explicit deepfakes will sidestep the explicit intent of the query and instead return “high-quality, non-explicit content—like relevant news articles—when it’s available,” the company wrote.

Websites that have a high number of pages removed from search under these policies will also be demoted in Google’s rankings, making them much more difficult for anyone to find. Google says this approach has worked well for other types of harmful content.
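Google has not disclosed how that demotion is calculated, but for a sense of how such a signal could work in principle, here is a minimal sketch in which a site's ranking score is scaled down as the share of its pages removed under the policy grows. The formula, the penalty cap, and the names are assumptions, not the company's actual method.

```python
# Illustrative sketch only: a hypothetical demotion heuristic, not Google's real signal.
def demoted_score(base_score: float, pages_removed: int, pages_indexed: int,
                  max_penalty: float = 0.9) -> float:
    """Scale a site's base ranking score down in proportion to its policy-removal rate."""
    if pages_indexed == 0:
        return base_score
    removal_rate = pages_removed / pages_indexed   # fraction of the site's pages removed under the policy
    penalty = min(max_penalty, removal_rate)       # cap the penalty so a site is never zeroed out entirely
    return base_score * (1.0 - penalty)
```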

Google’s change to its search engine comes just one day after Microsoft called on Congress to create a “deepfake fraud statute” to combat AI fraud in both images and voice replication, and about one week after Meta’s oversight board said the social media giant fell short in its response to a pair of high-profile explicit, AI-generated images of female public figures on its sites.

The U.S. government has already taken a number of steps to curb deepfakes. Recently, the Senate passed a bill that would allow victims of sexually explicit deepfake images to sue their creators for damages. And the FCC has banned robocalls with AI-generated voices, which have been on the rise over the past year, especially in the political arena.

Deepfakes continue to propagate, however, and Google acknowledged that even with today’s changes to its Search policy, such content will still surface.

“There’s more work to do to address this issue, and we’ll keep developing new solutions to help people affected by this content,” Higham wrote in the blog post. “And given that this challenge goes beyond search engines, we’ll continue investing in industry-wide partnerships and expert engagement to tackle it as a society.”
