
The Deepfake Is Here: How to Protect Your Business


Amid pandemic disruptions, burnout, and geopolitically motivated cyberattacks, what challenges do businesses and security teams face?

 

A new VMware Global Incident Response Threat Report reveals that security teams have been dealing with a surge in cyberattacks since Russia invaded Ukraine, along with emerging threats such as deepfakes, attacks on APIs, and cybercriminals targeting incident responders themselves.

 

According to Rick McElroy, principal cybersecurity strategist at VMware, “Cybercriminals are now incorporating deepfakes into their attack methods to evade security controls. Two out of three respondents in our report saw malicious deepfakes used as part of an attack, a 13% increase from last year, with email as the top delivery method. Cybercriminals have evolved beyond using synthetic video and audio simply for influence operations or disinformation campaigns. Their new goal is to use deepfake technology to compromise organizations and gain access to their environment.”

 

As the technology advances, the rising use of deepfakes poses a growing threat to businesses and society. Here are three tips to help companies combat it.

 

Technological progress benefits businesses and societies enormously, but it also brings new risks that are hard to manage. Artificial intelligence (AI) is at the forefront of emerging technology and is finding its way into more applications than ever.

 

From automating routine tasks to identifying hidden business drivers, AI has immense business potential. However, malicious use of AI can harm organizations and lead to a severe loss of credibility.

 

The FBI recently highlighted a trend, driven by the adoption of remote work, in which threat actors used deepfakes to pose as candidates for jobs at American companies. These actors stole U.S. citizens' identities in an effort to gain access to company systems.

 

How can companies combat the rising use of deepfakes? Here are a few ways to mitigate security risks.

 

Verify authenticity

 

Getting back to basics is often the most effective way to combat advanced technology. Deepfakes are created by harvesting a person's identifying information, such as photos and ID details, and using an AI engine to generate a digital likeness. Hackers often use existing video, audio, and images to imitate the victim's mannerisms and speech.

 

A recent case demonstrates how malicious actors exploit this technology: several European political leaders believed they were interacting with the Mayor of Kyiv, Vitali Klitschko, only to learn that they had been talking to a deepfake.

 

Companies face deepfake risks when interviewing candidates for open remote positions. Rolling back remote work norms is not practical if companies want to hire top talent.

 

However, asking candidates to present some form of official identification, recording video interviews, and requiring new employees to visit company premises at least once shortly after hiring will mitigate the risk of hiring a deepfake actor.

 

While these methods won't eliminate deepfake risk, they reduce the probability of a malicious actor gaining access to company secrets. Just as multi-factor authentication (MFA) blocks malicious access to systems, these analog checks create roadblocks to deepfake use.

 

Other analog methods include verifying an applicant's references and confirming their picture and identity.
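To make these layered checks concrete, the minimal Python sketch below records each analog check as an independent factor and reports a rough risk level, in the same spirit as MFA. The class and field names are hypothetical and not drawn from any real HR or identity-verification system.

from dataclasses import dataclass

# Hypothetical pre-hire verification checklist mirroring the analog
# checks described above. The field names are illustrative only.
@dataclass
class CandidateVerification:
    id_shown_live_on_camera: bool        # official ID presented during the call
    interview_recorded: bool             # video interview archived for later review
    references_confirmed_identity: bool  # referees confirmed the name and photo
    onsite_visit_scheduled: bool         # in-person visit booked right after hiring

    def factors_passed(self) -> int:
        # Each independent check counts as one "factor", in the spirit of MFA.
        return sum([
            self.id_shown_live_on_camera,
            self.interview_recorded,
            self.references_confirmed_identity,
            self.onsite_visit_scheduled,
        ])

    def risk_level(self) -> str:
        # A single forged artefact (e.g. a deepfaked video feed) should
        # not be enough to clear every factor.
        passed = self.factors_passed()
        if passed == 4:
            return "low"
        if passed >= 2:
            return "elevated"
        return "high"

if __name__ == "__main__":
    candidate = CandidateVerification(
        id_shown_live_on_camera=True,
        interview_recorded=True,
        references_confirmed_identity=False,
        onsite_visit_scheduled=False,
    )
    print(candidate.risk_level())  # -> "elevated"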

 

Fight fire with fire

 

Deepfake technology leverages deep learning (DL) algorithms to mimic a person's appearance and mannerisms. The results can be unsettling: AI can create moving images and convincingly realistic videos of real people.

 

Analog methods can combat deepfakes, but they take time. One way to detect deepfakes quickly is to turn the technology against itself: if DL algorithms can create deepfakes, why not use them to detect deepfakes, too?

 

In 2020, Maneesh Agrawala of Stanford University developed a tool that let filmmakers insert words into a video subject's sentences on camera. Filmmakers welcomed it, since scenes would no longer need to be reshot over faulty audio or dialogue. However, the technology's potential for abuse was immense.

 

Aware of this issue, Agrawala countered his own software with another AI-based tool that detects anomalies between lip movements and the words being pronounced. Deepfakes that impose words on a video in the subject's voice cannot fully adjust the subject's lip movements or facial expressions to match.
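The sketch below is not Agrawala's tool; it is a toy illustration of the underlying consistency check. It assumes a per-frame mouth-opening signal has already been extracted with some face-landmark detector and that the audio loudness envelope has been resampled to the same frame rate; a low correlation between the two hints that the speech may have been dubbed or synthesized. The function name and the demo signals are invented for illustration.

import numpy as np

def lip_sync_score(mouth_aperture: np.ndarray, audio_envelope: np.ndarray) -> float:
    """Toy consistency check between lip motion and speech energy.

    mouth_aperture: per-frame mouth opening (e.g. distance between upper-
        and lower-lip landmarks from any face-landmark detector).
    audio_envelope: audio loudness resampled to the same frame rate.

    Returns the Pearson correlation of the two signals. Values near zero
    (or negative) suggest the audio does not match the visible lip
    movement, one symptom of dubbed or synthesized speech.
    """
    if len(mouth_aperture) != len(audio_envelope):
        raise ValueError("signals must be aligned to the same frame count")
    a = (mouth_aperture - mouth_aperture.mean()) / (mouth_aperture.std() + 1e-8)
    b = (audio_envelope - audio_envelope.mean()) / (audio_envelope.std() + 1e-8)
    return float(np.mean(a * b))

# Synthetic demo: a well-synced mouth signal correlates strongly with the
# audio envelope, while a shuffled (out-of-sync) one does not.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    envelope = np.abs(rng.standard_normal(300))          # stand-in loudness curve
    synced = envelope + 0.05 * rng.standard_normal(300)  # lips follow the audio
    dubbed = rng.permutation(envelope)                   # lips ignore the audio
    print("synced clip :", round(lip_sync_score(synced, envelope), 2))
    print("dubbed clip :", round(lip_sync_score(dubbed, envelope), 2))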

 

Agrawala's solution can also be deployed to detect face impositions and other standard deepfake techniques. As with any AI application, much depends on the data the algorithm is fed. Even so, that dependence points to a connection between deepfake technology and the tools built to fight it.

 

Deepfakes rely on synthetic data: datasets extrapolated from real-world occurrences to cover many situations. For instance, synthetic data algorithms can take data from a single battlefield incident and generate many more variations of it, changing ground conditions, participant readiness, weaponry, and so on, then feed the results into simulations.

 

Companies can use synthetic data of the same kind to combat deepfakes. By extrapolating from current attacks, AI can predict and detect edge cases and expand our understanding of how deepfakes are evolving.
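As a hedged illustration of that idea, the sketch below creates synthetic training variants by jittering a handful of hypothetical per-clip feature vectors (the feature names, such as blink rate or lip-sync score, are invented) and fits a simple classifier. A real pipeline would use far richer simulation, but the augmentation step is the same in spirit.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical per-clip feature vectors (e.g. blink rate, lip-sync score,
# sensor-noise fingerprint) for a handful of known real and deepfaked
# clips. In practice these would come from a feature-extraction pipeline.
real_clips = rng.normal(loc=0.8, scale=0.1, size=(20, 3))
fake_clips = rng.normal(loc=0.4, scale=0.1, size=(20, 3))

def augment(features: np.ndarray, copies: int, jitter: float) -> np.ndarray:
    """Spawn synthetic variants of known samples by perturbing them.

    This is the "extrapolate from real-world occurrences" idea in its
    simplest form: each observed clip yields `copies` jittered variants,
    so the detector also sees conditions it has not literally observed.
    """
    repeated = np.repeat(features, copies, axis=0)
    return repeated + rng.normal(scale=jitter, size=repeated.shape)

X = np.vstack([
    augment(real_clips, copies=10, jitter=0.05),
    augment(fake_clips, copies=10, jitter=0.05),
])
y = np.array([0] * (len(real_clips) * 10) + [1] * (len(fake_clips) * 10))

detector = LogisticRegression().fit(X, y)
print("probability a new clip is a deepfake:",
      round(detector.predict_proba([[0.45, 0.50, 0.42]])[0, 1], 2))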

 

Leverage digital transformation and raise awareness

 

Despite the sophistication of anti-deepfake technology, there is no permanent fix. Companies can, however, mitigate the threat by leveraging their digital posture and educating employees.

 

Deepfake awareness helps employees scrutinize the information they receive. Companies can develop processes to verify identities in remote work situations and ensure employees follow them, given the threat deepfakes pose.

 

Once again, these measures cannot eliminate deepfake dangers on their own, but together they let companies build a robust framework that minimizes the threat.

 

The future will undoubtedly reveal new ways of combating this threat. Meanwhile, companies must remain aware of the risks deepfakes pose and work to mitigate them.
