Deepfake incidents are surging in 2024, predicted to increase by 60% or more this year and push global cases to 150,000 or more. That makes AI-powered deepfake attacks the fastest-growing type of adversarial AI today. Deloitte predicts deepfake attacks will cause over $40 billion in damages by 2027, with banking and financial services being the primary targets.
AI-generated voice and video fabrications are blurring the lines of believability to hollow out trust in institutions and governments. Deepfake tradecraft is now so pervasive in nation-state cyberwarfare organizations that it has matured into a standard attack tactic among nations that engage in constant cyberwar with one another.
"In today's election, advancements in AI, such as Generative AI or deepfakes, have evolved from mere misinformation into sophisticated tools of deception. AI has made it increasingly challenging to distinguish between genuine and fabricated information," Srinivas Mukkamala, chief product officer at Ivanti, told VentureBeat.
Sixty-two percent of CEOs and senior business executives think deepfakes will create at least some operating costs and complications for their organization in the next three years, while 5% consider it an existential threat. Gartner predicts that by 2026, attacks using AI-generated deepfakes on face biometrics will mean that 30% of enterprises will no longer consider such identity verification and authentication solutions to be reliable in isolation.
"Recent research conducted by Ivanti reveals that over half of office workers (54%) are unaware that advanced AI can impersonate anyone's voice. This statistic is concerning, considering these individuals will be participating in the upcoming election," Mukkamala said.
The U.S. Intelligence Community 2024 threat assessment states that "Russia is using AI to create deepfakes and is developing the capability to fool experts. Individuals in war zones and unstable political environments may serve as some of the highest-value targets for such deepfake malign influence." Deepfakes have become so common that the Department of Homeland Security has issued a guide, Increasing Threats of Deepfake Identities.
How GPT-4o is designed to detect deepfakes
OpenAI's latest model, GPT-4o, is designed to identify and stop these growing threats. The system card published on Aug. 8 describes it as an "autoregressive omni model, which accepts as input any combination of text, audio, image and video." OpenAI writes, "We only allow the model to use certain pre-selected voices and use an output classifier to detect if the model deviates from that."
Identifying potential deepfake multimodal content is one of the benefits of the design decisions that together define GPT-4o. Noteworthy is the amount of red teaming that has been done on the model, which is among the most extensive of recent-generation AI model releases industry-wide.
All models need to train continually on attack data to keep their edge, and that is especially true for keeping pace with attackers' deepfake tradecraft, which is becoming indistinguishable from legitimate content.
The capabilities below explain how GPT-4o features help identify and stop audio and video deepfakes.
Key GPT-4o capabilities for detecting and stopping deepfakes
Key features of the model that strengthen its ability to identify deepfakes include the following:
Generative Adversarial Networks (GANs) detection. GPT-4o can identify synthetic content produced with the same technology attackers use to create deepfakes. The model can spot previously imperceptible discrepancies in the content-generation process that even GANs can't fully conceal. An example is how GPT-4o analyzes flaws in how light interacts with objects in video footage or inconsistencies in voice pitch over time. 4o's GAN detection highlights these minute flaws that are undetectable to the human eye or ear.
GANs most often consist of two neural networks: a generator that produces synthetic data (images, videos or audio) and a discriminator that evaluates its realism. The generator's goal is to improve the content's quality until it deceives the discriminator. This adversarial training produces deepfakes nearly indistinguishable from real content.
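The generator-versus-discriminator dynamic described above can be illustrated with a deliberately minimal sketch. This is not a real GAN or anything from OpenAI's implementation: the distributions, the threshold-based "discriminator" and the update rule are all toy assumptions chosen to show how the generator iteratively adapts until the discriminator can no longer flag its output.

```python
import random
import statistics

random.seed(0)  # deterministic for illustration

# Toy "real" data distribution the generator tries to mimic.
REAL_MEAN, REAL_SD = 4.0, 1.0

class Generator:
    def __init__(self):
        self.mean = 0.0  # starts far from the real distribution

    def sample(self, n):
        return [random.gauss(self.mean, REAL_SD) for _ in range(n)]

def discriminator(batch):
    # Flags a batch as synthetic when its mean is far from the real mean.
    return abs(statistics.mean(batch) - REAL_MEAN) > 0.5  # True = "fake"

gen = Generator()
for step in range(200):
    fake = gen.sample(64)
    if discriminator(fake):
        # Generator nudges its parameters toward what fools the discriminator.
        gen.mean += 0.1 * (REAL_MEAN - gen.mean)
    else:
        break  # discriminator can no longer tell fake from real
```

After the loop, the generator's output statistics sit close to the real distribution, which is the core reason mature GAN output is so hard to distinguish from legitimate content.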

Voice authentication and output classifiers. One of the most valuable features of GPT-4o's architecture is its voice authentication filter. The filter cross-references each generated voice with a database of pre-approved, legitimate voices. What's fascinating about this capability is how the model uses neural voice fingerprints to track over 200 unique characteristics, including pitch, cadence and accent. GPT-4o's output classifier immediately shuts down the process if any unauthorized or unrecognized voice pattern is detected.
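The cross-referencing step above can be sketched in a few lines. This is a hypothetical illustration, not OpenAI's design: the three-feature fingerprints, the preset names and the distance threshold are invented stand-ins for the 200-plus characteristics the article describes.

```python
import math

# Hypothetical pre-approved voices: name -> toy fingerprint
# (pitch_hz, words_per_minute, accent_score). Real systems track
# far more characteristics; these three are illustrative only.
APPROVED_VOICES = {
    "preset_a": (210.0, 150.0, 0.2),
    "preset_b": (120.0, 130.0, 0.7),
}
MATCH_THRESHOLD = 10.0  # max allowed distance to any approved fingerprint

def is_authorized(candidate):
    """True only if the candidate fingerprint matches an approved preset."""
    return any(
        math.dist(candidate, ref) <= MATCH_THRESHOLD
        for ref in APPROVED_VOICES.values()
    )

def output_classifier(candidate):
    # Shut down generation when the voice deviates from approved presets.
    return "allowed" if is_authorized(candidate) else "blocked"
```

A voice close to a preset passes, while an unrecognized voice pattern is blocked before any audio is emitted, mirroring the fail-closed behavior the article attributes to the output classifier.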
Multimodal cross-validation. OpenAI's system card comprehensively defines this capability within the GPT-4o architecture. 4o operates across text, audio and video inputs in real time, cross-validating multimodal data as legitimate or not. If the audio doesn't match the expected text or video context, the GPT-4o system flags it. Red teamers found this especially crucial for detecting AI-generated lip-syncing or video impersonation attempts.
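The idea of flagging mismatches across modalities can be sketched as follows. This assumes, purely for illustration, that each modality has already been reduced to comparable features (a transcript and a duration); the similarity threshold and duration tolerance are invented values, not anything from GPT-4o.

```python
import difflib

def similarity(a, b):
    # Rough text similarity in [0, 1] between transcript and expected text.
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def cross_validate(audio_transcript, expected_text, audio_secs, video_secs):
    """Return a list of mismatch flags across modalities (empty = consistent)."""
    flags = []
    if similarity(audio_transcript, expected_text) < 0.8:
        flags.append("audio/text mismatch")
    if abs(audio_secs - video_secs) > 0.5:
        # A duration gap can indicate spliced audio or lip-sync fakery.
        flags.append("audio/video duration mismatch")
    return flags
```

Consistent inputs produce no flags, while a transcript that diverges from the expected text or an audio track that doesn't span the video raises one flag per inconsistency, which is the spirit of the cross-validation described above.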
Deepfake attacks on CEOs are growing
Of the thousands of CEO deepfake attempts this year alone, the one targeting the CEO of the world's biggest ad firm shows how sophisticated attackers are becoming.
Another occurred over Zoom, with multiple deepfake identities on the call, including the company's CFO: a finance worker at a multinational firm was allegedly tricked into authorizing a $25 million transfer by deepfakes of the CFO and other senior staff.
In a recent Tech News Briefing with the Wall Street Journal, CrowdStrike CEO George Kurtz explained how improvements in AI are helping cybersecurity professionals defend systems while also commenting on how attackers are using it. Kurtz spoke with WSJ reporter Dustin Volz about AI, the 2024 U.S. election and threats posed by China and Russia.
"And if now in 2024 with the ability to create deepfakes, and some of our internal guys have made some funny spoof videos with me and it just to show me how scary it is, you could not tell that it was not me in the video," Kurtz told the WSJ. "So I think that's one of the areas that I really get concerned about. There's always concern about infrastructure and those sort of things. Those areas, a lot of it is still paper voting and the like. Some of it isn't, but how you create the false narrative to get people to do things that a nation-state wants them to do, that's the area that really concerns me."
The critical role of trust and security in the AI era
OpenAI's design goals and an architectural framework that put deepfake detection of audio, video and multimodal content at the forefront reflect the future of gen AI models.
"The emergence of AI over the past year has brought the importance of trust in the digital world to the forefront," says Christophe Van de Weyer, CEO of Telesign. "As AI continues to advance and become more accessible, it is crucial that we prioritize trust and security to protect the integrity of personal and institutional data. At Telesign, we are committed to leveraging AI and ML technologies to combat digital fraud, ensuring a more secure and trustworthy digital environment for all."
VentureBeat expects to see OpenAI expand on GPT-4o's multimodal capabilities, including voice authentication and deepfake detection through GANs, to identify and eliminate deepfake content. As businesses and governments increasingly rely on AI to enhance their operations, models like GPT-4o become indispensable in securing their systems and safeguarding digital interactions.
Mukkamala emphasized to VentureBeat that "When all is said and done, though, skepticism is the best defense against deepfakes. It is essential to avoid taking information at face value and critically evaluate its authenticity."
source: https://venturebeat.com/security/how-gpt-4o-defends-your-identity-against-ai-generated-deepfakes/

