By Banji A.
The tech world is extremely intriguing. Over the past few decades, we have witnessed the creation and evolution of technologies that have made human lives a bit (or a lot) easier. In the creative arena, for instance, Adobe has delivered a suite of photo and video editing, animation, and compositing tools including Adobe After Effects, Premiere Pro, and the notoriously popular Photoshop. Whether these capabilities, which have put a vast array of artistic options on our tables, pose any significant national security or enterprise threat is debatable. What is not debatable, however, is the clear damage that can be done with deepfakes, a far more sinister step up from Adobe’s suite of artistic tools.
Deepfake—a combination of the words ‘deep learning’ and ‘fake’—refers to AI-based technology used to create or alter images, audio, and video, resulting in synthetic content that appears authentic. Deepfakes are computer-generated videos in which images are combined to create new footage depicting events, speeches, or actions that never actually happened. They can be quite convincing and quite difficult to identify as false. If, like me, you worry about the sanctity and integrity of information, I bet the red flags are springing up left, right, and center in your mind. A technology that makes it possible to look and sound like anyone, including those authorized to approve payments at a company, gives fraudsters an opportunity to extract potentially vast sums of money, and that should worry everyone.
A machine learning technique called a “generative adversarial network,” or GAN, can create fake imagery by studying thousands of a person’s photos and approximating them into a new image that is not an exact copy of any one of them, making the result harder to identify as false. GANs are general-purpose: the same approach can generate new audio from existing audio, or new text from existing text. The technology is already widely used in video manipulation, and adult content reportedly accounts for the overwhelming majority of deepfake videos in circulation. GANs are also becoming more widely available, and some mobile apps already use them to power consumer face- and voice-swap features. Imagine the damage that can be done when a child, responding to a real-looking video message from a parent, opens the door to a supposed service contractor who turns out to be anything but. Better left to the imagination, right?
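To make the adversarial idea concrete, here is a minimal, hypothetical sketch in plain Python/NumPy. A one-parameter-family “generator” learns to produce numbers matching a target distribution (standing in for real photos), while a “discriminator” learns to tell real samples from generated ones; each pushes the other to improve. Real deepfake GANs use deep neural networks on images rather than scalars, and every name and value below is illustrative, but the alternating training loop is the core of the technique.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # "Real data": samples from N(4, 0.5) -- a stand-in for genuine photos.
    return rng.normal(4.0, 0.5, size=n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator starts out producing N(0, 1)
w, c = 0.0, 0.0          # discriminator starts undecided (outputs 0.5)
lr, batch = 0.05, 64

for step in range(3000):
    # --- Discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    x = real_batch(batch)
    z = rng.normal(size=batch)
    g = a * z + b
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * g + c)
    # Gradients of -[log D(x) + log(1 - D(g))], averaged over the batch
    w -= lr * np.mean(-(1 - d_real) * x + d_fake * g)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # --- Generator update: push D(fake) toward 1 (fool the discriminator) ---
    z = rng.normal(size=batch)
    g = a * z + b
    d_fake = sigmoid(w * g + c)
    dg = -(1 - d_fake) * w       # d/dg of the generator loss -log D(g)
    a -= lr * np.mean(dg * z)
    b -= lr * np.mean(dg)

fake_mean = float(np.mean(a * rng.normal(size=10000) + b))
print(f"generated mean after training: {fake_mean:.2f} (target 4.0)")
```

The generator never sees the real data directly; it only learns from the discriminator’s reaction, which is what lets GANs synthesize new content that resembles, without copying, the originals.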
So concerned was the FBI about the misuse cases and security threats posed by deepfakes that it issued a public warning in mid-2021. According to the FBI, attackers can now fake their way into a company: they steal a person’s identity to apply for a job, then use images of that person to manipulate video and audio and impersonate them during a remote job interview. If care is not taken, you suddenly have a fake insider on your hands with access to business data and proprietary information, who can then sell that information to competitors or foreign governments. That is not where you want to be in today’s world, where information is the most valuable business asset. And this is just one of many damaging scenarios that deepfakes can inflict on an organization.
On the political side of things, you may have seen Jordan Peele’s video (not linked here due to the language used in it) depicting former president Barack Obama doing some trash talking, which of course never happened. It gives you an idea of the level of misinformation and disinformation that can be unleashed on the population using this little piece of tech. In one of the early cases of audio deepfakes, which underlines enterprise vulnerability, criminals used AI to impersonate the voice of a German company’s chief executive, then used the audio to fool the company into transferring $243,000 to a bank account, supposedly to pay a contractor for services rendered. Not good! When your customers think you lack the basic presence of mind to double-check things, it does not reflect well.
Another scenario is a PR nightmare: public opinion can head south because of fake videos depicting a CEO or other executives saying or doing unseemly things. The same outcome can occur when deepfaked versions of influential people are used to spread disinformation about a company or its products. Not only can this damage reputation, it can also influence consumer behavior and potentially affect the stock price. Very importantly, for an organization that operates in the government contracting space like ours, it can damage standing with current and prospective clients.
How can an organization guard against the destructive effects of deepfakes? What measures does ActioNet have on the ground to prevent and mitigate deepfake threats?
ActioNet recognizes that humans are the weakest link in the security chain simply because, unlike tools and technology, they have feelings, and those feelings can be appealed to, even exploited. This is why social engineering precedes most successful cyber-attacks. Employee training and awareness is a veritable first line of defense against deepfakes, and ActioNet has one of the most robust security training and awareness programs. Deepfake awareness and mitigation content is being integrated into the security awareness training curriculum, with a focus on how the technology is leveraged in malicious attempts and how those attempts can be detected. This will enable ActioNet employees to spot deepfake-based social engineering. The same content can be incorporated into the training programs we deliver to our customers, raising their awareness of this new threat vector and preparing their workforce to identify, report, and prevent its negative impacts.
Detecting false media early can help minimize the impact on your organization. A number of deepfake detection tools, many of them open source, can be leveraged to identify deepfakes and integrated into the overall security tool suite. Upon full implementation, this helps build an anti-deepfake firewall around your environment and its people, translating to a defense against attempts by malicious actors to influence public opinion through deepfakes.
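As a taste of what such detection tools look for, many GAN pipelines leave tell-tale high-frequency artifacts from their upsampling layers, and some published detectors examine an image’s frequency spectrum for this fingerprint. The sketch below is a toy heuristic in Python/NumPy, not any specific product’s method; the function name, the cutoff radius, and the synthetic “images” are all hypothetical.

```python
import numpy as np

def highfreq_ratio(img):
    """Fraction of spectral energy outside a low-frequency core.

    Toy illustration only: GAN upsampling can leave periodic
    high-frequency artifacts, a cue some detectors exploit.
    """
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8                      # arbitrary low-frequency cutoff
    low = spec[cy - r:cy + r, cx - r:cx + r].sum()
    return 1.0 - low / spec.sum()

rng = np.random.default_rng(1)
# A smooth gradient stands in for a natural photo; added noise stands in
# for synthesis artifacts. Real detectors work on actual image content.
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
noisy = smooth + 0.3 * rng.standard_normal((64, 64))

print(highfreq_ratio(smooth), highfreq_ratio(noisy))
```

The artifact-laden input scores a visibly higher high-frequency ratio; a production tool would feed richer features like these into a trained classifier rather than a single threshold.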
ActioNet is also developing a response strategy that can be set in motion when a deepfake is detected, with roles, responsibilities, and required actions defined in the plan. The response may include, for example, the communications team issuing a press statement that exposes the malicious deepfake, backed by evidence from a software-based detection tool. The bottom line: “seeing is believing” doesn’t apply as easily anymore. We have to question everything to counter ever-evolving threats and the new tools being developed.
Finally, good processes are immensely helpful in stopping the negative impacts of deepfakes. ActioNet has ISO 20000, ISO 27001, ISO 9001:2015, CMMI-DEV® Level 4, and CMMI-SVC® Level 4 certifications and vast experience helping agencies improve their security posture, processes, and procedures. If you have any security needs, ActioNet can help.