AI, Deepfakes, and Cybersecurity: Leaders Share Tips on Staying Safe
As artificial intelligence continues to advance, so does its potential for misuse.
Deepfakes, once a niche technology, are now being weaponized by cybercriminals, creating a new wave of sophisticated attacks that are difficult to detect and defend against.
To offer a glimpse into practical solutions and expert guidance for navigating this evolving threat landscape, we asked leading business, tech, and security professionals from the Techronicler community to help.
Here, these leaders share their single most important recommendation for individuals and businesses looking to protect themselves from the anticipated rise of AI-driven, deepfake-enabled cyberattacks in 2025.
Their insights provide actionable steps you can take to bolster your defenses and stay safe in the age of AI deception.
Read on!
Nadia Bonini
As a cybersecurity expert, I am often asked: ‘How can I protect myself or my business from AI-based deepfake attacks?’ Here are my top four recommendations:
1. Learn to trust your gut and educate your employees about what to look out for in deepfake media. Remember: ‘If it looks like a duck, walks like a duck, and quacks like a duck, then it probably is a duck.’
2. Use multi-factor authentication (MFA) and secret keywords. Organizations should implement internal policies and processes requiring additional verification steps for important requests (e.g., password reset, transfer of large amounts of money).
3. Invest in advanced threat detection tools. These tools leverage the power of artificial intelligence (AI) and machine learning (ML) to identify fake images, audio, and videos.
4. Add a C2PA- or Content Authenticity Initiative (CAI)-compliant cryptographic signature to your digital media. It won’t protect you from attacks, but it will boost trust in your organization by asserting the authenticity of your media. As an individual, learn how to verify such signatures (see the sketch after this list).
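To make the signing idea concrete, here is a minimal Python sketch of the underlying principle: sign a media file’s bytes with a private key and later verify the signature with the matching public key. It assumes the third-party `cryptography` package is installed and uses a placeholder byte string in place of a real image or video; real C2PA/CAI content credentials embed a richer, standardized manifest rather than a bare detached signature like this.

```python
# Illustrative only: the general sign/verify principle behind content
# authenticity signatures, not the actual C2PA/CAI manifest format.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a key pair and sign the media bytes.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"example media payload"   # stand-in for the real image/video bytes
signature = private_key.sign(media_bytes)  # detached signature over the media

# Consumer side: verify the signature against the publisher's public key.
try:
    public_key.verify(signature, media_bytes)
    print("Signature valid: media matches what the publisher signed.")
except InvalidSignature:
    print("Signature invalid: media was altered or is not from this publisher.")
```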
Nadia Bonini
Professional SEO, Tech Writer, & Cybersecurity Expert
John Smith
Staff Development via Education and Training: Education is a firm’s primary line of defense. Organizations should hold regular training sessions to help staff recognize and counter deepfakes.
Staff members should be alert to the possibility that criminals may use fabricated audio or video recordings to impersonate clients, partners, or executives in order to con them.
In a similar vein, people need to stay aware of the latest developments in deepfake technology and learn how to spot manipulated content.
John Smith
Founder, Sparkaven
Benjamin Foster
Advocating for Moral and Lawful Practices: Governments and business leaders must establish transparent rules and ethical standards for the use of artificial intelligence and deepfake technologies.
Both businesses and people can do their part to combat harmful deepfake usage by advocating for and supporting such policies.
We can all do our part to fight back against these dangers by reporting questionable content and collaborating with authorities when required.
Benjamin Foster
Senior Manager, UK Expat Mortgage
Harrison Tang
As Co-Founder and CEO of Spokeo, I recognize the critical importance of safeguarding our platform against AI-driven deepfake threats.
With my experience as a visionary leader, honored with the prestigious Ernst & Young Entrepreneur of the Year Award in 2015, I understand the value of proactive defense.
To protect Spokeo, I will strengthen media literacy by training our team to scrutinize unusual requests and verify their authenticity. Implementing robust security protocols, including multi-factor authentication and regular software updates, will further strengthen our defenses.
By leveraging AI-powered detection tools and adopting a zero-trust approach that verifies everything, from users and devices to data, Spokeo will maintain its reputation as a trusted and innovative people search engine.
This comprehensive strategy will protect our users’ sensitive information, ensuring continued success in 2025 and beyond.
Harrison Tang
CEO and Co-founder, Spokeo
Elvis Sun
Using authentication codewords is essential to preventing deepfake cyberattacks.
Under this system, special words or phrases known only to authorized personnel are rotated weekly or monthly.
Organizations should securely distribute codewords, ideally through in-person meetings or encrypted channels. The system requires these preset codewords for sensitive communications, particularly financial transactions or vital system access.
Success hinges on keeping these codewords confidential and promptly invalidating and replacing them whenever a breach is suspected.
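As a rough illustration of the rotation idea, the Python sketch below generates a fresh codeword on a fixed schedule and checks submitted codewords with a constant-time comparison. It is a minimal, hypothetical example with an in-memory store; as noted above, real deployments would distribute codewords through in-person meetings or encrypted channels rather than printing them.

```python
import hmac
import secrets
import time

ROTATION_PERIOD = 7 * 24 * 3600  # rotate weekly; a monthly period would also work

class CodewordVault:
    """Minimal in-memory codeword store with periodic rotation (illustrative only)."""

    def __init__(self) -> None:
        self._codeword = secrets.token_urlsafe(8)
        self._issued_at = time.time()

    def current(self) -> str:
        """Return the active codeword, rotating it if the period has elapsed."""
        if time.time() - self._issued_at > ROTATION_PERIOD:
            self.rotate()
        return self._codeword

    def rotate(self) -> None:
        """Invalidate the old codeword immediately, e.g. after a suspected breach."""
        self._codeword = secrets.token_urlsafe(8)
        self._issued_at = time.time()

    def verify(self, submitted: str) -> bool:
        """Constant-time comparison to avoid leaking the codeword via timing."""
        return hmac.compare_digest(self.current(), submitted)

vault = CodewordVault()
print("Codeword for this period:", vault.current())
print("Wire transfer request verified:", vault.verify(vault.current()))
```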
Elvis Sun
Founder, PressPulse
Andy Golpys
As I lead Madebyshape, I recognize the importance of safeguarding our reputation and assets from AI-driven deepfake cyberattacks.
These threats pose a significant risk to our brand’s integrity, so it’s crucial that I take proactive measures. I’ll train our team to scrutinize unusual requests and verify their authenticity, and I’ll establish layered security protocols to prevent unauthorized access.
Staying informed on deepfake tactics and best practices is vital. I’ll leverage AI-powered detection tools to identify potential threats and ensure our systems are up-to-date.
By prioritizing media literacy and verification, I’ll shield Madebyshape from deepfake threats, protecting our clients, employees, and partners.
Effective cybersecurity is a top priority. I’ll regularly review and update our incident response plans, ensuring seamless communication and swift action in case of a breach.
Madebyshape’s success and reputation depend on our ability to adapt and innovate. By staying vigilant and investing in cutting-edge security solutions, I’m confident we’ll navigate the evolving cyber landscape and maintain our position as a trusted industry leader in 2025 and beyond.
Andy Golpys
Founder, MadeByShape
Kalim Khan
To safeguard against AI-driven, deepfake-enabled cyberattacks, I recommend businesses prioritize strong identity verification methods, such as multi-factor authentication and biometric systems.
Continuous employee training on recognizing deepfake content and malicious tactics is also crucial.
Legal protections, like cybersecurity insurance, also provide an added layer of defense.
Kalim Khan
Founder, Affinity Law
Nicos Vekiarides
Fighting Deepfakes with Better AI: As cybercriminals harness generative AI for more sophisticated deepfake attacks, businesses must apply the same technology to fight those attacks.
New AI technology can analyze and authenticate documents, images, voice, and video media. Forensic scanning looks for anomalies in the source data to see if it has been altered.
For example, falsified images may show blurred lines or inconsistent shadows, or the metadata may show signs of modification. Documents can be analyzed for textual changes or changes to embedded images.
Validated documents can be fingerprinted so their authenticity can be verified later. The fingerprints can be stored using blockchain technology to ensure they can’t be changed. Many authentication systems use tamper scoring for risk profiling and to gauge the trustworthiness of digital materials.
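A simple way to picture the fingerprinting step is sketched below in Python: hash the validated document, record the hash, and later recompute and compare it. An ordinary dictionary stands in for the tamper-proof, blockchain-backed store described above, and the document IDs and contents are hypothetical.

```python
import hashlib

ledger = {}  # stand-in for an append-only / blockchain-backed fingerprint store

def fingerprint(data: bytes) -> str:
    """SHA-256 digest used as the document's fingerprint."""
    return hashlib.sha256(data).hexdigest()

def register(doc_id: str, data: bytes) -> None:
    """Record the fingerprint of a document that has been validated."""
    ledger[doc_id] = fingerprint(data)

def verify(doc_id: str, data: bytes) -> bool:
    """Later, confirm the document still matches its recorded fingerprint."""
    return ledger.get(doc_id) == fingerprint(data)

original = b"claim form: insured value $250,000"
register("claim-001", original)

print(verify("claim-001", original))                                  # True: unchanged
print(verify("claim-001", b"claim form: insured value $950,000"))     # False: altered
```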
Using solutions like the Attestiv deepfake detection platform allows anyone to analyze videos, or social links to videos, for deepfake content. Powered by proprietary AI analysis that provides scoring and a comprehensive breakdown of fake elements, such technologies pinpoint exactly where those elements appear in each video.
As AI technology advances, deepfakes will become harder to identify, and financial service companies, insurance companies, and media outlets will need more sophisticated AI tools to detect them.
With deepfakes becoming so prevalent, building authentication protocols for every financial transaction, insurance claim, and media post makes sense.
The fraudsters aren’t going to go away, but we can all be more diligent in verifying digital content and materials rather than accepting them at face value.
Nicos Vekiarides
CEO & Co-founder, Attestiv
Arvind Rongala
For individuals: Make media literacy a top priority. Develop the ability to critically assess digital material; if a picture, voice message, or video appears odd or dubious, make sure it’s real.
For instance, contact the purported sender directly to confirm unexpected requests for money or private information. Think of it like getting a call from a “relative” in trouble; always get confirmation before offering assistance.
For companies: Invest in cutting-edge authentication systems. Biometrics, blockchain-based identity verification, and multi-factor authentication (MFA) can all greatly reduce the threat of deepfakes. Regularly test your defenses with simulated attacks and teach staff to identify typical attack patterns.
For instance, a business could require staff to use secondary channels or in-person confirmation for high-stakes decisions, such as money transfers. This is comparable to a bank requesting a second form of identification before authorizing large withdrawals.
For both: Use resources such as deepfake-detection software and practice good cybersecurity hygiene, including frequent updates and threat assessments.
Staying ahead of these dangers is similar to installing high-quality locks and routinely inspecting your home’s security system.
Edward Tian
Businesses should constantly work on improving their cybersecurity. There is a reason business account passwords often must be changed regularly, for example: cybercriminals are smart, and you want to remain a step ahead.
AI has only made cybercriminals that much smarter, and when bad actors use it, cyberattack attempts have a higher likelihood of success.
So, constantly improving cybersecurity is crucial so that businesses can stay ahead of the rapidly evolving efforts of AI-driven attacks.
Also, just as AI can be used for harm in this way, it can also be used for protection on the business end of things. AI is helping a lot of businesses make their cybersecurity as effective as possible, especially when up against bad AI.
Edward Tian
CEO, GPTZero
On behalf of the Techronicler community of readers, we thank these leaders and experts for taking the time to share valuable insights that stem from years of experience and in-depth expertise in their respective niches.
If you wish to showcase your experience and expertise, participate in industry-leading discussions, and add visibility and impact to your personal brand and business, get in touch with the Techronicler team to feature in our fast-growing publication.