The US Federal Trade Commission believes AI-driven impersonation fraud is on the rise and has proposed expanding its impersonation fraud rules to cover individuals as well as businesses and governments.

The FTC condemned impersonation fraud in its Thursday announcement and said that emerging technologies like AI-generated deepfakes threaten “to turbocharge this scourge.” This week’s proposed addition comes years after the business and government impersonation fraud rules were first proposed in 2021.

Under the new rules, it would be illegal to create an AI-generated deepfake of someone for the purpose of impersonating them to commit financial fraud. Impersonated individuals would also have a path to legal recourse under the rules.

“Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever,” said FTC Chair Lina M. Khan in a statement. “Our proposed expansions to the final impersonation rule would do just that, strengthening the FTC’s toolkit to address AI-enabled scams impersonating individuals,” Khan added.

The FTC is also considering making it illegal for AI firms to provide tools that they know are frequently being used to commit impersonation fraud. “The Commission is also seeking comment on whether the revised rule should declare it unlawful for a firm, such as an AI platform that creates images, video, or text, to provide goods or services that they know or have reason to know is being used to harm consumers through impersonation,” the FTC wrote.

This addition could give the agency grounds to punish AI firms that don’t try to keep their tools out of the wrong hands. For example, last month an AI-powered robocall impersonating President Biden was made with tech from AI firm Eleven Labs. While Eleven Labs was able to identify the creator of the fake Biden robocalls and suspend their account, the FTC’s new rules may incentivize AI firms to be proactive rather than reactive about suspending users who generate AI content for criminal purposes, like impersonation fraud.
It’s not entirely clear how AI firms could fully prevent such abuse, however. The new rules could lead AI firms to demand more sensitive personal data from users in an effort to verify their identities. In the crypto world, for example, exchange users have to upload government identification and other personally identifying information to pass Know Your Customer (KYC) verification. But even KYC checks and government IDs can be faked with AI tools, as a 404 Media report revealed this month.

The FTC’s latest push against AI-powered impersonation adds to a growing trend at the agency toward holding tech companies and platforms accountable for their products and services. Back in December, the FTC moved to update its rules under the Children’s Online Privacy Protection Act to shift more of the legal burden of protecting kids online onto websites and social media platforms. And earlier this week, the FTC warned tech firms, including AI companies, that quietly revising a privacy policy to capture more user data could be illegal.