
How to detect and defend against deepfake voice scams

Written by Shaun Kehrberg | Sep 23, 2024

Phone scams have been around since practically the invention of the landline. But there’s a dangerous new element making scam calls even more convincing: the use of deepfake voice technology.  

A deepfake can be any type of media — audio, video, or image — that has been digitally manipulated, typically to mislead people or change the original meaning. A voice deepfake, sometimes known as AI voice cloning, uses artificial intelligence to construct spoken sentences that sound like a real person saying something they never actually said. This fabricated voice track can even be placed underneath real video footage, making it appear as if a person has endorsed something or expressed an opinion contrary to their actual views. The coupling of real video and false audio makes these deepfakes even more challenging to detect and flag.

Deepfake voice scams aren’t new, but AI is making them more realistic and easier to create. In the past, producing a deepfake required a significant amount of time and technical expertise. Today, more than a dozen AI voice cloning tools are freely available on the internet, and they require only a basic level of expertise to use.
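To illustrate how low that barrier has become, consider Coqui TTS, one of the many freely available open-source tools (named here purely as an illustration; the sketch below is not a tool discussed elsewhere in this post). Given a short reference recording of a voice, a few lines of Python are enough to synthesize new speech in that voice:

```python
# Illustrative only: modern voice cloning takes very little code.
# Coqui TTS is one of many freely available open-source tools.
from TTS.api import TTS

# Load a pretrained voice-cloning model (downloaded on first use).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Synthesize a sentence the speaker never said, in the speaker's voice,
# from just a few seconds of reference audio.
tts.tts_to_file(
    text="This is a sentence I never actually said.",
    speaker_wav="reference_clip.wav",  # short sample of the target voice
    language="en",
    file_path="cloned_voice.wav",
)
```

That is the entire pipeline; no audio engineering background is required.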

RELATED: See how our Chief Product Officer faked the voice of Hiya's CEO

Examples of deepfake voice scams
  • Family emergency scams – Family emergency scams, sometimes called loved-ones scams or grandparent scams, involve a fraudster pretending to be a family member who is experiencing an emergency and needs immediate financial help. In the past, fraudsters might simply imitate a sobbing child or grandchild, but now scammers can use AI voice cloning to make the ruse even more convincing.
  • Employee impersonations – Employee impersonations using deepfake voice technology are also becoming more common. News reports tell of an employee paying out $25 million after being fooled by a deepfake video call impersonating the company’s CFO, and a hacker deepfaking an employee’s voice to gain access to the company’s computer systems.  
  • Political deepfakes – One of the most prominent deepfake voice scams occurred earlier this year, as the U.S. presidential election season was just getting underway. In the Democratic primary in New Hampshire, a deepfake voice clone of President Joe Biden was used in a robocall to voters urging them to skip primary voting and instead save their vote for the general election. 

Companies are beginning to take notice of this trend. In a survey conducted by the identity verification firm Regula, 37% of organizations worldwide reported experiencing voice deepfake identity fraud, and 82% of organizations think that voice deepfakes will be a growing threat in the next two years.

Deepfake voice detection by humans is unreliable

According to a 2023 study conducted by researchers from the Department of Security and Crime Science at University College London, humans cannot reliably detect speech deepfakes. In an experiment with more than 500 participants, individuals listened to real and fake audio clips and attempted to differentiate between them. The result: listeners correctly identified voice deepfakes only 73% of the time.

The study concluded, “As speech synthesis algorithms improve and become more realistic, we can expect the detection task to become harder. The difficulty of detecting speech deepfakes confirms their potential for misuse and signals that defenses against this threat are needed.”


Governmental regulations seek to rein in deepfakes

The battle against deepfakes and AI voice scams needs to be fought on multiple fronts. Governments play a role too. In August, the European Union’s AI Act was enacted. It’s the world’s first major law regulating the use of AI, and it governs the way companies develop, use and apply AI.


According to NBC News, “The legislation applies a risk-based approach to regulating AI, which means that different applications of the technology are regulated differently depending on the level of risk they pose to society.” Examples of high-risk AI systems include autonomous vehicles, medical devices, loan decisioning systems, educational scoring, and remote biometric identification systems.

The United States is also drafting laws to protect against impersonations and deepfakes. Earlier this year, the U.S. Federal Trade Commission sought public comment on a proposal that would prohibit the impersonation of individuals, extending the protections of its new rule on government and business impersonation.

According to the FTC, “The agency is taking this action in light of surging complaints around impersonation fraud, as well as public outcry about the harms caused to consumers and to impersonated individuals. Emerging technology — including AI-generated deepfakes — threatens to turbocharge this scourge, and the FTC is committed to using all of its tools to detect, deter, and halt impersonation fraud.”


Even U.S. states are passing legislation to combat deepfakes. Recently, California Governor Gavin Newsom signed Assembly Bill 2839 (AB 2839) into law. The law aims to address the rise of AI-generated disinformation ahead of the 2024 elections, and its backers call it crucial for protecting the integrity of elections and restoring voter trust.

How to defend against deepfake voice imposters

Although scams carried out using deepfake speech synthesis pose a danger to both individuals and businesses, all is not lost. While fraudsters are using AI for sinister purposes, the same technology can be used to identify deepfakes.
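Hiya does not publish the internals of its detector, but the general approach behind automated deepfake detection can be sketched: extract acoustic features from a recording and train a classifier to separate genuine from synthesized speech. The sketch below is a minimal illustration under that assumption; the MFCC features, toy file list, and logistic regression model are stand-ins, not Hiya's implementation.

```python
# Minimal, illustrative sketch of signal-level deepfake detection:
# summarize each clip with spectral features, then train a
# real-vs-synthetic classifier on labeled examples.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def extract_features(path: str) -> np.ndarray:
    """Summarize a clip as the mean and variance of its MFCCs."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.var(axis=1)])

# Assumed training data: labeled clips of genuine and cloned speech.
real_clips = ["real_001.wav", "real_002.wav"]
fake_clips = ["fake_001.wav", "fake_002.wav"]

X = np.stack([extract_features(p) for p in real_clips + fake_clips])
y = np.array([0] * len(real_clips) + [1] * len(fake_clips))  # 1 = synthetic

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new recording: estimated probability the voice is synthetic.
prob = clf.predict_proba(extract_features("incoming_call.wav").reshape(1, -1))[0, 1]
print(f"Synthetic-voice likelihood: {prob:.2f}")
```

Production systems rely on far richer models and much larger datasets, but the shape of the problem is the same: turn audio into features, then score how likely those features are to come from a synthesis algorithm.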

Earlier this month, Hiya acquired Loccus.ai, the industry leader in deepfake detection solutions. The combination of Loccus.ai’s voice intelligence technology and Hiya’s Adaptive AI fraud prevention system provides businesses and carriers with the most complete fraud call protection available in the industry today. This versatile solution supports multiple languages, formats, and platforms, including video recordings and live calls across all devices.

Loccus.ai’s deepfake voice detection technology is now available as Hiya AI Voice Detection. Businesses and carriers can immediately integrate AI voice detection into their own services and applications to enhance consumer trust and security. With Hiya, it’s never been easier to identify deepfake audio in personal communications apps, news and social media content, video conferencing, contact center platforms, and more.
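This post does not include integration details, so the sketch below is purely hypothetical: the endpoint URL, request fields, and response field are invented placeholders, not Hiya's documented API. It only illustrates what embedding a voice-detection check into an existing call flow might look like.

```python
# Hypothetical integration sketch: the URL, request fields, and response
# shape below are placeholders, not Hiya's documented API.
import requests

HYPOTHETICAL_ENDPOINT = "https://api.example.com/v1/voice-detection"  # placeholder URL

def score_recording(wav_path: str, api_key: str) -> float:
    """Submit a recording; return a 0-1 likelihood that the voice is synthetic."""
    with open(wav_path, "rb") as f:
        resp = requests.post(
            HYPOTHETICAL_ENDPOINT,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"audio": f},  # placeholder field name
        )
    resp.raise_for_status()
    return resp.json()["synthetic_likelihood"]  # placeholder response field

if __name__ == "__main__":
    score = score_recording("call_snippet.wav", api_key="YOUR_KEY")
    if score > 0.8:  # the threshold is an application-level choice
        print("Likely deepfake: flag the call for review.")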

Request a demo of Hiya AI Voice Detection