Protecting Against Deepfakes: A Cybersecurity Roadmap for 2025
Artificial intelligence (AI) is rapidly transforming our world, offering incredible advancements while simultaneously presenting unprecedented challenges.
One of the most alarming of these is the rise of deepfakes – AI-generated synthetic media that can convincingly impersonate individuals and fabricate events.
As we embrace the wonders of AI, we are also confronted with its potential for misuse, particularly in the hands of cybercriminals.
In this post, we present insights from the Techronicler community of leaders and experts and reveal the strategies being employed to combat deepfakes in 2025.
Read on!
Alex Ford
To tackle the rising threat of deepfake-enabled cyberattacks, my top advice is to focus on advanced identity verification (IDV) using tools like biometrics and AI.
Businesses, especially in finance and other regulated sectors, should adopt dynamic KYC processes that go beyond basic checks, adding multi-factor authentication and liveness detection to stop fraud in its tracks.
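As a rough illustration of what these layered checks can look like, here is a minimal Python sketch of a KYC-style decision that combines document, biometric, liveness, and MFA signals. The fields, thresholds, and scores are hypothetical placeholders, not any vendor’s API.

```python
# Hypothetical layered identity-verification (IDV) decision.
# Field names and thresholds are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class IDVSignals:
    document_valid: bool     # government ID passed authenticity checks
    face_match_score: float  # selfie vs. ID photo similarity, 0.0-1.0
    liveness_score: float    # anti-spoofing liveness check, 0.0-1.0
    mfa_passed: bool         # one-time code or hardware key confirmed

def approve_identity(s: IDVSignals,
                     face_threshold: float = 0.90,
                     liveness_threshold: float = 0.85) -> bool:
    """Every layer must pass; one weak signal fails the whole check."""
    return (s.document_valid
            and s.face_match_score >= face_threshold
            and s.liveness_score >= liveness_threshold
            and s.mfa_passed)

# A convincing deepfake may fool face matching yet still fail liveness.
print(approve_identity(IDVSignals(True, 0.97, 0.40, True)))  # False
```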
For individuals, staying vigilant is key. Limit the personal information you share online, enable biometric security on devices, and be cautious of suspicious requests for sensitive details.
Finally, collaboration matters. Businesses can work together on shared fraud detection networks, while governments play a role in setting standards for IDV technology.
By combining smart tech, personal awareness, and teamwork across industries, we can outsmart deepfake threats and protect identities.

Alex Ford
President, North America – Encompass Corporation
Christian Garner
My number one recommendation to people looking to protect themselves from AI-driven cyberattacks is to trust your gut!
This may seem counterintuitive, but it is the first step before taking action – you must recognize that you are receiving an unusual request. These requests are often time-sensitive, designed to create a sense of urgency in the recipient.
Security awareness training should be part of any organization’s toolkit to combat cyberattacks. It will enhance your staff’s ability to detect potentially malicious requests – hone your “spidey senses,” if you will.
If you find yourself on the receiving end of one of these attacks, challenge the requester if you’re on a voice or video call. While they may have some intel on you, the company, or the person they are impersonating, they won’t have all the answers.
Lastly, if it’s a text-based request, you can use AI detector tools to sniff them out.

Christian Garner
Security Consultant, CG Security Consulting
Ariel Tiger
Companies, especially fintechs, must align their innovative business models with strong fraud prevention and compliance frameworks.
Striking the right balance is crucial.
In such a heavily regulated industry, accountability is key—if you’re facilitating harmful commerce, you’ll face serious consequences.
You have to ‘know before you grow,’ as regulators expect companies to protect customers and the ecosystem. Those that do will thrive and drive sector growth.
I’ve seen firsthand how this approach sets businesses apart and fuels sustainable success.

Ariel Tiger
CEO, EverC
Seth Geftic
Verify requests using distinct channels: If you receive an unusual request through a video or audio call, make sure you verify it using a separate channel. An urgent phone call requesting a financial payment could be verified with the sender over email or private message. Establishing code words is also wise.
Know the signs: Deepfakes can be convincing, but there are signs that reveal the scam. Watch carefully to see if the audio matches the speaker’s mouth movements or listen for snippets of audio that don’t sound authentic.
Use MFA: Businesses should protect data and systems with security protocols like multi-factor authentication (MFA). Because it forces attackers to bypass several layers of security, MFA can keep sensitive information out of their hands; a minimal sketch follows this list.
Limit voice and video data: While it won’t always be possible, businesses and individuals should be mindful of how much voice and video data they share. As deepfakes are created using this information, limiting its availability can prevent misuse.
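To make the MFA point concrete, here is a minimal sketch using the open-source pyotp library to generate and verify time-based one-time passwords (TOTP); the account and issuer names are placeholders.

```python
# Minimal TOTP sketch using the pyotp library (pip install pyotp).
import pyotp

# Provisioning: generate a per-user secret once and store it server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The user enrolls the same secret in an authenticator app via this URI.
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

# Verification: codes rotate every 30 seconds, so a stolen voice or
# video alone is not enough to pass this second factor.
code = totp.now()          # in practice, typed in by the user
print(totp.verify(code))   # True only within the current time window
```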

Seth Geftic
Vice President of Product Marketing, Huntress
Steve Tcherchian
We’re seeing more instances of deepfake technology creating hyper-realistic but entirely fabricated audio and video to mislead voters, smear candidates, or undermine trust in electoral systems.
Just open social media. Although many of these fakes are hilariously funny, it’s hard to tell what’s real and what isn’t.
This weaponization of the technology, coupled with the rapid spread of misinformation across social media, can have real-world consequences that influence public perception and even election outcomes.
Addressing these threats requires a multi-layered approach.
We can talk all day about enhancing cybersecurity defenses for election infrastructure, but we need to focus on developing better tools for detecting and mitigating deepfakes and fostering greater public awareness of misinformation tactics.
There are just too many examples even today of the lack of cybersecurity oversight in our election process.
Having worked nearly my entire career in cybersecurity, I have seen firsthand how quickly evolving technology can outpace traditional security measures, making it critical for election officials, tech companies, and voters alike to stay vigilant and informed.
This is a global challenge, and only trusted and robust cooperation between governments and the private sector will safeguard elections from emerging risks.

Steve Tcherchian
CISO, XYPRO.com
Devin Olsen
In the field of cybersecurity, specifically in authentication and authorization, there is an idea called ‘zero trust.’
Boiled down to its most basic, zero trust means not implicitly trusting any connection, no matter what history exists. Every communication is a new scenario and should be treated with full security measures.
Extending that principle into every digital interaction is going to be the best way to protect ourselves from deepfake attacks. You should treat every communication with the same skepticism you’d give to a stranger approaching you on the street, especially if that communication comes with a link or a request for something.
Given that deepfake attacks can simulate human interaction through both voice and video, you can’t blindly trust any communication without extra supporting evidence.
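As a loose sketch of the zero-trust idea in code, the handler below (using the PyJWT library) re-verifies a signed token on every request instead of trusting a connection because of its history; the key, claims, and policy here are invented for the example.

```python
# Zero-trust sketch using PyJWT (pip install PyJWT): every request is
# verified from scratch; no caller is trusted because of past history.
import jwt

SIGNING_KEY = "replace-with-a-real-secret"  # illustrative placeholder

def handle_request(token: str, action: str) -> str:
    # Verify signature and expiry on EVERY call; never cache trust.
    try:
        claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return "rejected: could not verify caller"
    # Even with a valid token, sensitive actions need extra evidence,
    # mirroring the out-of-band checks recommended above.
    if action == "transfer_funds" and not claims.get("mfa"):
        return "rejected: step-up authentication required"
    return f"accepted: {action} for {claims['sub']}"

# Demo: mint a token locally and watch the sensitive action get blocked.
token = jwt.encode({"sub": "alice", "mfa": False}, SIGNING_KEY, algorithm="HS256")
print(handle_request(token, "transfer_funds"))
```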
In 2022, researchers at Samsung published MegaPortraits, an AI system that can create a deepfake video from a single still image.
Audio deepfake technology has been widespread since 2020, with one of the most prominent misuses coming in the 2024 New Hampshire Democratic presidential primary, where a political consultant paid $500 for automated phone calls mimicking President Joe Biden.
Adopting a zero-trust mindset, combined with thorough security and phishing awareness training, will be the best protection against this new wave of deepfake cyberattacks.

Devin Olsen
Senior Consultant
Robert Siciliano
Always be skeptical of unsolicited messages that request sensitive information or financial transactions.
Scrutinize requests, especially if they seem unusual or come from unexpected sources. Cybercriminals may use AI-generated content to create convincing phishing emails or messages.
When encountering video or audio, it’s essential to consider the tone of the message. Does the language and phrasing align with what you’d expect from your boss, family member or the person delivering it?
Before taking any action, take a moment to pause and reflect.

Robert Siciliano
Security Awareness Trainer, Protect Now, LLC
Craig McGill
The rise of deepfake attacks is a fear for many businesses and individuals alike.
What makes it worse is that it’s an ongoing arms race because the techniques, the technology and the persuasion tactics used keep getting more and more sophisticated. It’s something that’s a big concern in the contact centre industry for obvious reasons.
On both sides, having a good deterrent starts with knowledge.
From a company’s perspective, it starts with informing customers – through channels they will actually engage with, which for many may even be old-fashioned post – about what the company will and won’t ask for when it contacts them.
For example, state that you’ll only call between certain hours, you’ll never ask for certain information, and so on.
And make sure that information is everywhere – websites, letters, emails. It may seem like overkill, but that’s a message that needs to be hammered home more than once, especially when dealing with vulnerable people. It’s also vital for businesses to look at how they engage and accept that one format may not suit all. We live in an omnichannel age, and different age groups respond in different ways.
Customers, for their part, need to make sure they know how the companies they deal with would contact them and what those companies would ask for – even write it down, or have a relative write it down. This deals with basic fraud attempts.
Where it can get more challenging is if you get a text from someone pretending to be a relative looking for money – or, worse, a deepfake that sounds like the relative.
The simplest step is the oldest: no matter what someone is asking for or offering, and no matter how convincing it sounds, just take five minutes to check.
If someone is calling you via WhatsApp, Facebook or another tool and you don’t recognise the account, hang up or stop chatting and contact the person via the details you personally know to be true.
Another thing to bear in mind – and this plays havoc with some of my family members who love to natter on the phone to anyone who calls them – is that if you aren’t sure who or what you are speaking to, say as little as possible. That way, any technology recording the call doesn’t capture enough audio to sample and recreate your voice as a convincing deepfake.
The other element to this, ironically, is that the same technology that is enabling potential escalations of fraud can play a key role in fighting it.
Technologies like voice biometrics and multi-factor authentication provide robust protection by making it increasingly difficult for malicious actors to exploit deepfake tactics. Equally, AI-powered anomaly detection and behavioral analytics enable us to identify and flag suspicious interactions in real time.
Additionally, by integrating machine learning models with fraud databases, systems can proactively identify and mitigate risks before they escalate. Secure communication channels, such as end-to-end encryption, also help ensure the integrity of interactions.
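As a toy illustration of the anomaly-detection idea, the sketch below uses scikit-learn’s IsolationForest to flag an interaction whose features deviate from past behavior; the call features and numbers are invented for the example.

```python
# Toy anomaly-detection sketch using scikit-learn's IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

# Features per call: [duration_sec, credential_requests, hour_of_day]
normal_calls = np.array([
    [300, 0, 10], [420, 0, 11], [250, 0, 14],
    [380, 1, 15], [310, 0, 9],  [290, 0, 16],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_calls)

# A short call at 3 a.m. with repeated credential requests stands out.
suspicious = np.array([[60, 4, 3]])
print(model.predict(suspicious))  # [-1] marks the call as anomalous
```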

Craig McGill
Senior Manager, 8×8
Frank Sondors
My top recommendation for protecting against these attacks is to build strong authentication systems and educate teams about recognizing manipulation. Multi-factor authentication (MFA) can help safeguard teams against potential threats.
Deepfakes are now advanced enough to convincingly mimic voices and visuals, which is why extra layers of identity verification are essential. In my own experience, adding measures like one-time codes or biometric scans has been a game changer, especially when dealing with sensitive communications or financial transactions.
For instance, we implemented these protocols after seeing how easy it was for fake voice recordings to sound legitimate. Knowing that even a simple two-step verification process can prevent a costly mistake gives me peace of mind.
We once encountered a phishing attempt that used a voice clone of an executive to request sensitive information. Thankfully, our team was trained to validate such requests through secondary channels, which prevented a breach.
Regularly updating employees on these threats and implementing verification protocols saved us.
These attacks are becoming more sophisticated and often succeed because of human error or lack of awareness.
Businesses and individuals must adopt layered defenses and practice due diligence when something feels off. Staying informed and securing communication channels are the best ways to prepare for the challenges ahead.

Frank Sondors
Founder, Salesforge
John Wilson
AI is adept at mimicry and can be easily automated. These two factors will lead to an unprecedented level of social engineering attacks. So how can individuals and businesses protect themselves?
I predict that automation will lead to diminished returns for attackers. In my work as Senior Fellow, Threat Research at Fortra, I’ve seen the same HR employee targeted by multiple payroll diversion emails several times a day for weeks on end. Not just at one company, but across our customer base.
I’ve even witnessed multiple attackers impersonate the same employee on the same day. Victims are a finite resource. A finance or HR professional might fall for a particular scam once but will be unlikely to repeat their mistake. Even those individuals who seem to fall for every hustle will eventually run out of money to give to scammers.
Just as overfishing has resulted in dwindling marine populations, empty nets, and even the economic collapse of fishing communities, over-phishing will have a similar impact on the criminal gangs that rely on unenlightened victims for their livelihood.
Nearly all social engineering attacks rely on impersonation. Deepfake-enabled cyberattacks take impersonation to a whole new level, yet the advice remains the same.
Always verify a request to provide sensitive information or to move money through a secondary channel that you control.
If you receive a voicemail message from your CEO, call them back on the number listed in the corporate directory. If you receive an email or text message from someone in Corporate Security, reach out to them via Teams or Slack for confirmation.
Remember, the second channel is only useful when (1) you are the initiator, and (2) you control the source of the contact details.
Suppose you receive a text and a phone call from your bank. Call them back using the number on their website or the back of your debit card.
Real-time communications require a different strategy.
For example, suppose a business contact, family member, or other trusted contact makes an unusual request via FaceTime.
Ask them a question that isn’t public knowledge before providing money, login credentials, or bank account details. Or ask them something likely to evoke a particular response, for example, ask your seafood-hating cousin if she wants to grab sushi the next time she’s in town.
Ten years from now we may need blade runners to discern between humans and lifelike replicants, but for the next few years we’ll be able to get by with a secondary communication channel coupled with a simple question or two to avoid being fooled.

John Wilson
Cybersecurity Professional, Fortra
On behalf of the Techronicler community of readers, we thank these leaders and experts for taking the time to share valuable insights that stem from years of experience and in-depth expertise in their respective niches.
If you wish to showcase your experience and expertise, participate in industry-leading discussions, and add visibility and impact to your personal brand and business, get in touch with the Techronicler team to feature in our fast-growing publication.