Opinions expressed by Entrepreneur contributors are their own.
Technological improvements advance business and society in significant ways, but progress also brings new risks that are difficult to manage. Artificial intelligence (AI) sits at the forefront of emerging technology and is finding its way into more applications than ever.
From automating clerical tasks to identifying hidden business drivers, AI has immense business potential. However, malicious AI use can harm enterprises and lead to an extreme loss of credibility.
The FBI recently highlighted a rising trend, driven by the adoption of remote work, in which malicious actors used deepfakes to pose as interviewees for jobs at American companies. These actors stole U.S. citizens' identities with the intent of gaining access to company systems. The implications for corporate espionage and security are immense.
How can companies combat the rising use of deepfakes even as the technology behind them grows more sophisticated? Here are a few ways to mitigate the security risks.
Going back to basics often works best when combating advanced technology. Deepfakes are created by stealing a person's identifying information, such as photos and ID documents, and using an AI engine to build a digital likeness. Malicious actors often use existing video, audio and imagery to mimic their victim's mannerisms and speech.
A recent case highlighted the extremes to which malicious actors take this technology. Several European political leaders believed they had conversed with the Mayor of Kyiv, Vitali Klitschko, only to be informed that they had interacted with a deepfake.
The office of the Mayor of Berlin eventually discovered the ploy after a phone call to the Ukrainian embassy revealed that Klitschko was engaged elsewhere. Companies would do well to study the lessons of this incident: identity verification and other seemingly simple checks can reveal deepfake use.
Companies risk encountering deepfakes when interviewing candidates for open remote positions. Rolling back remote work norms is not practical for companies that want to hire top talent these days. However, asking candidates to display official identification, recording video interviews and requiring new employees to visit company premises at least once shortly after hiring will mitigate the risk of hiring a deepfake actor.
While these methods won't eliminate deepfake risks, deployed together they reduce the probability of a malicious actor gaining access to company secrets. Just as two-factor authentication blocks malicious access to systems, these analog methods create roadblocks to deepfake use.
Other analog methods include verifying an applicant's references, including the reference's picture and identity. For instance, send the applicant's picture to the reference and ask them to confirm they know that person. Verify the reference's own credentials by engaging with them through official or business domains.
Fight fire with fire
Deepfake technology leverages deep learning (DL) algorithms to mimic a person's appearance and mannerisms. The results can be uncanny: given just a few data points, AI can create moving images and seemingly realistic videos of us.
Analog methods can combat deepfakes, but they take time. One faster solution is to turn the technology against itself: if DL algorithms can create deepfakes, why not use them to detect deepfakes too?
In 2020, Maneesh Agrawala of Stanford University created a solution that allowed filmmakers to insert words into on-camera subjects' sentences. To the naked eye, nothing was amiss. Filmmakers rejoiced because they would no longer have to reshoot scenes over faulty audio or dialogue. However, the negative implications of this technology were immense.
Aware of this issue, Agrawala and his team countered their software with another AI-based tool that detects anomalies between lip movements and word pronunciations. Deepfakes that dub words into a video in the subject's voice cannot perfectly match the subject's lip movements and facial expressions.
Agrawala's solution can also be deployed to detect face swaps and other common deepfake techniques. As with all AI applications, much depends on the data the algorithm is fed. That dependence, however, points to a deeper connection between deepfake technology and the tools that fight it.
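The core idea behind lip-sync anomaly detection can be illustrated with a toy sketch. The code below is a hypothetical simplification, not Agrawala's actual tool: it compares a per-frame lip-aperture signal (how open the mouth is in the video) with a per-frame speech-energy signal (how loud the audio is). In genuine footage the two signals track each other; when dubbed words break that relationship, the correlation drops and the clip is flagged.

```python
# Toy sketch of lip-sync anomaly detection (hypothetical, not a real
# product): compare lip motion from video with speech energy from
# audio. In genuine footage they rise and fall together; a low
# correlation suggests the audio was altered after filming.
from math import sqrt

def pearson(a, b):
    """Pearson correlation between two equal-length signals."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / sqrt(var_a * var_b)

def flag_lip_sync_anomaly(lip_aperture, speech_energy, threshold=0.5):
    """Flag a clip when lip motion and speech energy disagree."""
    return pearson(lip_aperture, speech_energy) < threshold

# Genuine clip: the mouth opens when speech energy rises.
genuine_lips  = [0.1, 0.8, 0.9, 0.2, 0.1, 0.7]
genuine_audio = [0.2, 0.9, 0.8, 0.1, 0.2, 0.8]

# Tampered clip: new words dubbed in, so lips no longer match.
tampered_audio = [0.9, 0.1, 0.2, 0.8, 0.9, 0.1]

print(flag_lip_sync_anomaly(genuine_lips, genuine_audio))   # False
print(flag_lip_sync_anomaly(genuine_lips, tampered_audio))  # True
```

Production detectors work on far richer features (facial landmarks, phoneme timing, learned embeddings), but the principle is the same: learn what genuine audiovisual agreement looks like and flag clips that deviate from it.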
Deepfakes rely on synthetic data: datasets extrapolated from real-world events to account for many situations. For instance, synthetic-data algorithms can take the record of a single military battlefield incident and generate many more simulated incidents from it, varying ground conditions, participant readiness, weaponry conditions and so on, then feed the results into simulations.
Companies can use synthetic data of the same kind to combat deepfakes. By extrapolating from current incidents, AI can predict and detect edge cases and expand our understanding of how deepfakes are evolving.
Accelerate digital transformation and education
Despite the sophistication of the technology combating deepfakes, Agrawala warns there is no long-term solution to deepfakes. On the surface, that is a distressing message. However, companies can fight back by strengthening their digital security posture and educating employees on best practices.
For instance, deepfake awareness helps employees critically evaluate the information they receive: material that seems outlandish or out of proportion can be called out immediately. Companies can also develop processes to verify identities in remote work situations and, given the deepfake threat, count on employees to follow them.
Once again, these methods cannot combat deepfake dangers by themselves. Combined with the techniques above, however, they give companies a robust framework that minimizes deepfake threats.
Advanced tech calls for innovative solutions
The ultimate solution to deepfake threats lies in technological advancement. Ironically, the answer to deepfakes lies within the technology that powers them. The future will undoubtedly reveal new ways of combating this threat. Meanwhile, companies must remain aware of the risks deepfakes pose and work to mitigate them.