
Biden voice-clone robocall highlights AI-generated phone scams

The Joe Biden voice-clone robocall encouraging voters not to vote in the Democratic primary in New Hampshire was an eerie reminder that AI-generated robocalls and scam calls are here to stay.  

The New Hampshire primary in January marked the first contest for the Democrats in the 2024 U.S. presidential election, so we’ll likely see even more of these fake political messages — targeting various candidates and issues — leading up to the general election in November. 

On Jan. 22, NBC News reported on a robocall that appeared to be an unlawful attempt at voter suppression. The call featured a voice clone of President Biden directed at New Hampshire voters:

“It’s important that you save your vote for the November election. We’ll need your help in electing Democrats up and down the ticket. Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday.”

Additionally, NBC noted that the call was spoofed. The phone number that showed up on recipients’ phones was that of the former chairperson of the New Hampshire Democratic Party.

Although unrelated to the Biden AI scam, it’s interesting to note that New Hampshire had the highest percentage of spam calls of any U.S. state, according to Hiya’s Q3 2023 Global Call Threat Report.

The threat of voice-clone scams

Even though the phony Biden call was small in scale compared to some massive scam campaigns, it underscores the threat posed by AI-generated voice clones, both in political messaging and in perpetrating phone scams. 

In our blog post, Voice fraud and the threat of generative AI, Hiya’s Chief Product Officer James Lau showed how easy it is for scammers to use readily available software to carry out phone scams. And while voice-cloning technology existed during the last presidential election, the rapid advances in AI since then have made it possible for people with limited technical skills to create calls like these.

FCC makes voice-clone calls illegal

Less than three weeks after the Biden voice-clone call, the US Federal Communications Commission (FCC) unanimously adopted a ruling stating that calls made with AI-generated voices are considered “artificial” under the Telephone Consumer Protection Act (TCPA) and are, therefore, illegal. The TCPA is the primary law the FCC uses to help limit junk calls and requires telemarketers to obtain written consent from consumers before robocalling them. According to a press release issued by the FCC:

“The rise of these types of calls has escalated during the last few years as this technology now has the potential to confuse consumers with misinformation by imitating the voices of celebrities, political candidates, and close family members. While currently State Attorneys Generals can target the outcome of an unwanted AI-voice generated robocall — such as the scam or fraud they are seeking to perpetrate — this action now makes the act of using AI to generate the voice in these robocalls itself illegal, expanding the legal avenues through which state law enforcement agencies can hold these perpetrators accountable.”

Fighting back against unwanted calls

You may wonder if call protection services such as Hiya Protect can stop voice clone calls from reaching the public. We asked Hiya’s Sr. Product Manager, Jonathan Nelson, for the answer. 

“Hiya doesn’t listen to the audio of incoming calls, so we don’t know the content of a call before it reaches the recipient,” Nelson said. “However, the actual method of making the call is unchanged. So everything we do to detect spam calls works just as well against voice clone campaigns.”

One of the weapons in Hiya’s spam-fighting arsenal is its use of Adaptive AI, the industry’s first self-learning spam protection system that adjusts to the latest fraud and nuisance calls. Unlike other solutions, Adaptive AI has four layers of protection that together analyze every aspect of the phone call: the phone number, the call recipient, the enterprise making the call, and the characteristics of the call itself. Each layer is described below, followed by a simplified sketch of how such a layered approach can fit together.

Hiya’s Adaptive AI technology analyzes every facet of a call to provide the most complete and effective spam and fraud call protection available. 

  • Base protections analyze the phone number. This first layer considers the characteristics and history of the phone numbers making calls, including key compliance checks (such as number validity), characteristics of the number itself (such as the type of line used), and trends in past calling behavior (such as call volume and how recipients receive those calls). 

  • Spam Threat Scanning uses data left behind by spammers in carrier network traffic. It analyzes the characteristics of a call in real time to determine each unique call’s spam risk, regardless of the phone number. This makes it a crucial tool for identifying emerging spam and fraud tactics.

  • Personal Call Filtering recognizes that spam is best viewed through the eyes of the individual call recipient: one person’s unwanted call might be another person’s important call. Hiya Protect uses the person’s past interactions with callers to understand which connections are important to them and to protect them from targeted attacks.

  • Enterprise Call Scoring assesses incoming calls based on the caller’s history across all the numbers they own, similar to how credit bureaus assign data-driven scores to analyze creditworthiness. This allows Hiya to assess the reputation of callers regardless of the numbers they’re calling from, a vital tool when spammers use tactics like switching phone numbers to evade spam labeling (a practice known as number rotation).
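To make the layered approach above more concrete, here is a minimal, purely hypothetical sketch of how a four-layer call-scoring pipeline could be structured. It is not Hiya’s implementation or API; every name, weight, and threshold below (Call, RecipientProfile, the scoring cutoffs, and so on) is invented for illustration.

from __future__ import annotations

from dataclasses import dataclass, field

@dataclass
class Call:
    calling_number: str
    line_type: str                      # e.g. "voip", "mobile", "landline"
    calls_from_number_last_hour: int    # outbound volume seen across the network
    enterprise_id: str | None = None    # set when the caller is a known business

@dataclass
class RecipientProfile:
    # Numbers this recipient has previously answered or called back.
    known_contacts: set[str] = field(default_factory=set)

def base_protection_score(call: Call) -> float:
    """Layer 1 (base protections): characteristics and history of the number."""
    score = 0.0
    if call.line_type == "voip":
        score += 0.2                    # cheap, disposable lines are abused more often
    if call.calls_from_number_last_hour > 100:
        score += 0.4                    # unusually high outbound volume
    return score

def spam_threat_scan_score(call: Call) -> float:
    """Layer 2 (spam threat scanning): real-time traits of this specific call."""
    # A real system would use live network signals; this stand-in just flags
    # extreme bursts regardless of the number's prior reputation.
    return 0.3 if call.calls_from_number_last_hour > 500 else 0.0

def personal_filter_score(call: Call, profile: RecipientProfile) -> float:
    """Layer 3 (personal call filtering): is this caller important to this recipient?"""
    return -0.5 if call.calling_number in profile.known_contacts else 0.0

def enterprise_score(call: Call, enterprise_reputation: dict[str, float]) -> float:
    """Layer 4 (enterprise call scoring): reputation across all numbers a business owns."""
    if call.enterprise_id is None:
        return 0.0
    # A good reputation lowers risk; a poor (negative) reputation raises it.
    return -enterprise_reputation.get(call.enterprise_id, 0.0)

def classify(call: Call, profile: RecipientProfile,
             enterprise_reputation: dict[str, float]) -> str:
    """Combine all four layers into a single verdict for this call."""
    risk = (base_protection_score(call)
            + spam_threat_scan_score(call)
            + personal_filter_score(call, profile)
            + enterprise_score(call, enterprise_reputation))
    if risk >= 0.6:
        return "fraud"
    if risk >= 0.3:
        return "spam"
    return "allow"

if __name__ == "__main__":
    call = Call("+16035551234", "voip", calls_from_number_last_hour=650)
    profile = RecipientProfile(known_contacts={"+12065557890"})
    print(classify(call, profile, enterprise_reputation={}))  # prints "fraud"

The design idea the sketch mirrors is that each layer contributes an independent signal (number reputation, real-time call characteristics, the recipient’s own calling history, and the enterprise’s cross-number reputation), and the verdict comes from combining those signals rather than from any single check.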

For more information about Hiya Protect, visit our website, or send us a message.

Author: Andrea Moreno

Carrier Customer Marketing Manager
