Introduction
In recent months, prospects and clients alike have been asking us about deepfakes. What are we doing about them? Have we detected any deepfakes in our system? Is there still any value in having voice biometric systems if it is easy to defeat them? And so on. There’s a lot of concern.
This article has been prepared to answer these and other related questions.
What Are Deepfakes?
Simply put, “deepfakes” are typically pictures, audio recordings, or videos of people that have been generated by deep neural networks (DNNs). To create deepfakes, sophisticated DNN algorithms are trained on thousands of pictures, thousands of hours of speech, or thousands of hours of video clips. These algorithms “learn” the features that make a person unique, and a generalized DNN model is created. Subsequently, when these models are provided with even limited amounts of pictures, video, or audio from a specific person, they are able to replicate the unique features of that individual and generate a very realistic photo, audio clip, or video of that person.
Should You Be Concerned?
YES, and we are too! However, ours is a healthy and respectful concern for deepfake technology, not one of panic. You should not panic either, and you should certainly not consider doing away with your use of voice biometrics. There are many factors to consider, and we’ll dive into the important details throughout the remainder of this document.
But first, the sections that follow lay out our rationale for the continued use of voice biometrics.
Deepfakes in the News
There have been substantial improvements in DNN technologies in the past year, with numerous press releases and articles highlighting the powerful capabilities of DNNs. Consider OpenAI and its ChatGPT product. While not directly relevant to the purpose of this document, it illustrates the pace of change: a couple of years ago, few people knew about this generative AI technology, or even about OpenAI. Now, everyone knows about ChatGPT.
More relevant to this document, in the past year, substantial venture capital investments have been made in numerous companies developing speech-related generative AI tools. Many press releases, blog posts, and articles quickly ensued. Relative to negative uses of deepfake technology, consider this article.
It’s a scary thought that someone’s voice might be closely replicated with just a few seconds of speech. So, voice biometric companies are naturally very concerned. The good news: voice biometric vendors have known for quite some time that these tools exist, and most responsible vendors already offer a variety of tools and techniques to address deepfakes. More on this later.
But deepfakes are not a “sudden” issue. The potential misuse of voice-related deepfake technology was highlighted almost three years ago in a story about a $35M wire fraud in Hong Kong. It was the first widely reported (and large) failure due to a deepfake -- and there will no doubt be others reported in the future if adequate steps are not taken to protect against deepfakes.
Bypassing Voice Biometrics?
The $35M wire fraud described above used a sophisticated synthetic speech (deepfake) attack. However, the breach ultimately succeeded by exploiting procedural and human errors, and there was no apparent voice biometric anti-spoofing technology in place. This point is worth mentioning, as most voice biometric vendors offer some form of technology to detect synthetic speech.
But, is this technology foolproof? The honest answer is that no technology is 100% foolproof. And in fact, there have been numerous stories in the news highlighting how someone has been able to bypass voice biometric authentication systems using off-the-shelf generative AI technology. From February 2023, consider this article from an investigative journalist who "broke into" his own account at Lloyds Bank.
Opportunity for Discussion
Some in our industry would argue that this is a sensational and irresponsible piece of journalism since there are issues with the methodology, and the overall message about voice biometrics is negative. However, we see this as an opportunity to discuss the issues and educate people on the myths and truths about deepfakes.
Article Truths or Myths?
In all fairness, there are some truthful elements in articles like these. But there are also some myths and mistruths. Regardless, the investigative journalist's article should be viewed as a wake-up call for companies -- both voice biometric software vendors and the clients who use our systems. And while we don't like the negative message about voice biometrics, articles like these do provide us with guidance -- and motivation -- to continue enhancing our offerings. Relative to some of the misleading information that is out there, a few points are worth addressing.
First among them: why wouldn't anti-spoofing systems be in place? There are many reasons, the first of which is that they are relatively new (and deepfakes are a relatively new issue in the industry). Many vendors, ourselves included, are still working to release upgrades to their systems. Also, some vendors charge extra for these tools, so some customers are likely trying to save costs. And other existing deployments may be exposed not for lack of money or interest, but because anti-spoofing is not yet fully operational: enabling it may require significant upgrades from legacy platforms, potential downtime, and so on.
Deepfakes in Context
With some background on deepfakes provided, it's now a good time to look at the context and rationale behind deepfakes -- when and where they are most likely to be attempted, and so on. Below are several topics that we feel are worth mentioning.
Compliant vs. Non-Compliant
One key question to ask is whether your users are compliant or non-compliant. By this, we are referring to their willingness and motivation to use voice biometric authentication. Non-compliant users are likely to be those who are mandated to use voice biometrics – for example, those under parolee monitoring. Compliant users are those who are interested in the further protection of their accounts – for example, banking and brokerage users, healthcare account users, and similar scenarios.
Non-compliant users have greater motivation to attempt deepfake attacks on their own accounts (self-collusion), so keep this in mind if you have non-compliant users.
Automated vs. Interactive
If you have an IVR system (or conversational AI or "bot"), you may be more susceptible to deepfakes. The reason is that these systems are automated (not monitored by humans) and use short-duration speech for authentication (which is easier to synthesize). Compare this to call centers, which are far more interactive in nature. Longer passages and conversations are extremely difficult to fake, as there will be significant response delays, unnatural responses, etc. A call center agent will quickly know they aren’t speaking with a real person.
Friendly Fraud
Those in the banking world know that much bank fraud is “friendly fraud” – committed by family members or friends who have access to your home phone/network or cellphone, know or can find out your knowledge-based-authentication (KBA) responses, etc. Unfortunately, family and friends are also far more likely to be able to record your voice. Together, these people effectively have comprehensive insider knowledge about you and can bypass components of MFA more readily.
Beware of Social Media
It's important to mention social media, as it is enabling a new kind of friendly fraud. Simply put, complete strangers can now potentially access your face and voice, as pictures, videos, podcasts, and other recorded media are freely and openly posted to social media platforms.
Technical Acumen
While off-the-shelf generative AI tools such as those from ElevenLabs are making it easy to create deepfakes, it's also important to realize the amount of technical acumen required to break into systems protected by voice biometrics. You need to collect adequate speech samples from the target, set up the deepfake software to create a realistic synthetic speech model for the target, bypass all other security factors of the system you're trying to break into, and inject the deepfake speech (quickly and accurately) into the live session.
Given this last point, we believe it is far more likely for deepfakes to be deployed in highly researched, isolated, and individualized scenarios.
The article about the $35M wire fraud case is a good example. There has also been a significant recent uptick in deepfakes used in "hostage" or "travel emergency" scams that target concerned family members (especially the elderly): the deepfaked person is portrayed as being in a very stressful situation, urgently needing money, and so on.
Key Recommendations
While the above-referenced article and points of consideration are far from comprehensive, we can immediately recommend best practices to help manage the threat of deepfakes. There are two key recommendations to make for all deployments:
- 1 Implement Multi-Factor Authentication. This is our #1 recommendation, and it is not new. It’s critical to have a layered approach to security. The investigative journalist’s article showed that the author clearly stacked the deck in his favor. However, it remains highly unlikely that a fraudster would be able to bypass multiple security factors, have recordings of the true customer, and be able to inject them into an IVR session without detection. To recap, minimal MFA requirements are:
- Something You Know. This could be a password, shared secret, or knowledge-based-authentication question.
- Something You Have. This could be an ID (physical or digital), a token, cellphone, or other identifying item you possess.
- Something You Are. This is a biometric factor, or specifically, using a voiceprint in our case.
- 2 Implement Anti-Spoofing Measures. We have a sophisticated, DNN-based anti-spoofing engine to help detect synthetic speech (deepfakes). Given recent events in the news and our industry as a whole, we’ve decided to update our legacy platform to always perform anti-spoofing checks on every sample that is submitted to the system. We return a specific error code so that appropriate actions can be taken (per client specifications). A minimal sketch of how such a layered decision might look follows below.
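To make the layered approach concrete, here is a minimal, purely illustrative sketch of an MFA decision that screens every sample for spoofing before anything else. The factor names, score ranges, thresholds, and error-code strings are assumptions made for illustration; they are not our product's actual API.

```python
# Illustrative sketch only: names, thresholds, and error codes are assumptions,
# not our product's actual API.
from dataclasses import dataclass

@dataclass
class AuthAttempt:
    knowledge_ok: bool        # "something you know" (e.g., a KBA answer)
    possession_ok: bool       # "something you have" (e.g., a registered phone)
    voiceprint_score: float   # similarity to the enrolled voiceprint, 0..1
    spoof_score: float        # likelihood the audio is synthetic, 0..1

VOICE_THRESHOLD = 0.80        # illustrative; tuned per deployment in practice
SPOOF_THRESHOLD = 0.50        # illustrative; tuned per deployment in practice

def authenticate(attempt: AuthAttempt) -> str:
    """Layered decision: every sample is screened for synthetic speech
    before the biometric match or the other factors are considered."""
    if attempt.spoof_score >= SPOOF_THRESHOLD:
        return "REJECT_SYNTHETIC_SPEECH"    # hypothetical error code for client handling
    if not (attempt.knowledge_ok and attempt.possession_ok):
        return "REJECT_MFA_FACTOR_FAILED"
    if attempt.voiceprint_score < VOICE_THRESHOLD:
        return "REJECT_VOICE_MISMATCH"
    return "ACCEPT"
```

The key point is the ordering: even a perfect voiceprint match is rejected if the anti-spoofing check flags the audio, and a deepfake alone is never sufficient because the other factors must still pass.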
Other Recommendations
Earlier we identified IVR systems, and particularly those with non-compliant users, as having a greater likelihood for deepfake attacks. With MFA and anti-spoofing in place, these additional recommendations may make sense for certain clients:
- 3 Implement Randomness. The IVR passphrase the investigative journalist used was a common, static passphrase provided by several competitors. Static passphrases have always been susceptible to recorded playback attacks, as the speech content is known ahead of time. Random passphrases or digits will force fraudsters to dynamically generate responses in a timely manner – a more difficult task. If you have a static passphrase, consider switching to our RandomPIN™ use case.
- 4 Use Outbound Calling. If you have an application-triggered authentication need (vs. using IVR as a front-end to your call center), consider outbound calls to a trusted (registered) phone instead of inbound calls to your IVR. Combined with random prompts, this adds yet another factor, strengthening authentication.
- 5 Implement Response Timers. This goes hand-in-hand with IVR sessions. Don’t give callers too much time to respond to your IVR dialogs. Fail them after a short duration and generate another (different) random prompt. If you are using our built-in IVR dialogs, we are already managing this for you.
- 6 Limit Retries. Don’t allow users many or unlimited attempts to complete their IVR session. We recommend no more than 3 total attempts per IVR session. Again, if you are using our built-in IVR dialogs, we are already managing this for you.
- 7 Implement Failure Detection. Failure detection is another feature we support on our platform. We can detect X failures within Y minutes, which can be useful if someone is specifically targeting an account and making repeated attempts to break in. A hedged sketch of this kind of sliding-window detection, alongside random prompts and retry limits, follows below.
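The sketch below, offered under the same caveat that all names and parameter values are illustrative assumptions rather than our platform's actual interfaces, combines three of the ideas above: random digit prompts (recommendation 3), per-session retry limits (recommendation 6), and sliding-window failure detection (recommendation 7).

```python
# Illustrative sketch only: function names and parameter values are
# assumptions, not our platform's actual interfaces.
import secrets
import time
from collections import defaultdict, deque

MAX_ATTEMPTS_PER_SESSION = 3       # recommendation 6: limit retries
FAILURE_LIMIT = 5                  # "X" failures ...
WINDOW_SECONDS = 10 * 60           # ... within "Y" minutes (recommendation 7)

def random_digit_prompt(length: int = 6) -> str:
    """Recommendation 3: generate a fresh random digit string for each prompt."""
    return "".join(secrets.choice("0123456789") for _ in range(length))

def session_allows_retry(attempts_so_far: int) -> bool:
    """Recommendation 6: cap the attempts a caller gets in one IVR session."""
    return attempts_so_far < MAX_ATTEMPTS_PER_SESSION

_recent_failures = defaultdict(deque)   # account_id -> timestamps of failures

def record_failure(account_id: str, now: float | None = None) -> bool:
    """Recommendation 7: record a failed attempt and return True when the
    account has accumulated too many failures inside the sliding window."""
    now = time.time() if now is None else now
    window = _recent_failures[account_id]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()   # drop failures that have aged out of the window
    return len(window) >= FAILURE_LIMIT
```

Flagging an account this way does not have to block it automatically; it simply gives the client a signal that someone may be probing that specific account.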
Other Considerations in the Fight against Deepfakes
Although not stated explicitly above, it should be clear that voice biometric vendors and their clients must work together to properly configure the voice authentication system(s) in place. What’s also clear is that deepfakes are not going away anytime soon. Below is a summary of key initiatives in response to deepfakes:
Voice Biometric Vendor Initiatives
Those of us in the voice biometrics industry have always been developing techniques to catch fraudulent speech samples. We develop measures, fraudsters then develop countermeasures. We then develop counter-countermeasures, and so on. This is yet another chapter of this cycle, and one which we’ll likely be caught in for some time. So, we are spending a considerable amount of time on our vendor-agnostic deepfake detection tools.
A properly configured voice biometric system, deployed within a multi-factor authentication scheme and combined with AI-based deepfake detection tools, will provide the best defense against voice deepfakes.
Watermarking is another promising area we are investigating. Essentially, original audio samples are marked in such a way that we can later detect whether the mark has been removed or the audio altered -- alerting us that an unnatural process has been run on the recording.
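As one hedged illustration of the general idea (not a description of any production scheme), a simple spread-spectrum approach adds a low-amplitude pseudorandom signal to the original audio; if a suspect recording no longer correlates with that signal, the audio has likely been altered, re-synthesized, or never came from the watermarked source. The seed, amplitude, and threshold below are assumptions chosen purely for illustration.

```python
# Purely illustrative spread-spectrum watermark sketch. Real systems embed
# marks designed to survive compression and re-encoding; this toy version does not.
import numpy as np

SEED = 1234     # shared secret between embedder and detector (assumption)
ALPHA = 0.002   # watermark amplitude, kept small so it stays inaudible

def _mark(length: int) -> np.ndarray:
    """Pseudorandom +/-1 sequence derived from the shared seed."""
    rng = np.random.default_rng(SEED)
    return rng.choice([-1.0, 1.0], size=length)

def embed(audio: np.ndarray) -> np.ndarray:
    """Add the low-amplitude watermark to a mono float audio signal."""
    return audio + ALPHA * _mark(len(audio))

def looks_intact(audio: np.ndarray, threshold: float = 0.5) -> bool:
    """Correlate against the known sequence; for sufficiently long recordings,
    a low score suggests the audio was altered or never watermarked."""
    w = _mark(len(audio))
    score = float(np.dot(audio, w)) / (ALPHA * len(audio))
    return score >= threshold
```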
Government Initiatives
Due to the many articles about deepfakes, and the use of generative artificial intelligence systems in general, there have been multiple calls by Congress and other government officials to study these threats and develop appropriate legislation. Google “Sam Altman on Capitol Hill” to gain more insight. He is the CEO and co-founder of OpenAI and is trying to raise awareness and develop responsible usage guidelines for this technology. There will no doubt be numerous new laws and regulations proposed in the near future relating to deepfake technologies.
Generative AI Vendor Initiatives
Early on, Microsoft saw the potential for misuse of its VALL-E technology and limited how people could access and use it. Google’s conversational AI, “Bard”, was also released with ethical limits. And OpenAI has decided not to release its Voice Engine product widely due to concerns over misuse. Many other vendors now require consent and attestation statements that you won't use their technology for illegal purposes.
And some companies have released deepfake detection tools so you can test whether a suspect recording was created by them. These tools rely on vendor-specific "watermarks" that allow them to detect their signature in a recording.
These are promising initiatives, but clearly more needs to be done as generative AI tools get better and better over time.
Conclusions
First and foremost, it’s important to recognize that deepfakes are a valid and growing concern for all of us. We treat malicious use of synthetic speech and voice conversion technologies very seriously. And for years now, our team has been dedicating significant resources to the research and study of these tools – both from the creation standpoint and the detection standpoint.
And while many of the recent news articles about deepfakes have headlines and content that are somewhat deceptive, these articles do help to keep vendors in our industry honest – and motivated – relative to developing increasingly better technologies for our clients.
And second, for Customer Not Present (CNP) applications, voice biometrics often remains the best (and only) “something you are” factor available. If you need to call a call center or IVR system, fingerprints won’t help, facial recognition systems won’t help, nor will other forms of biometrics. Voice biometric systems remain an easy and cost-effective part of your MFA strategy.
- Abandoning voice biometrics because of isolated use cases is not the answer. It makes no sense at all to go back to using only User ID and Password or a KBA process. These have been proven to be weak security factors, easily hacked or discoverable via social engineering.
- Voice biometrics are not 100% perfect (no factor is), but their use provides you with a statistically much greater level of confidence vs. not using it at all. This is true regardless of the potential presence of deepfakes.
We prepared this document to help our clients, prospects, and others understand deepfakes, what we recommend you do to protect your voice biometric systems from them, what we’re doing in our ongoing anti-spoofing efforts, and what others are doing to help. We’ve also tried to provide some context on which scenarios are most likely to see deepfake attacks.
Finally, we realize the topic of deepfakes is highly complex. Should you have any remaining questions, or wish to discuss your specific concerns and needs relative to deepfakes, please don't hesitate to contact us.