Digital Forensics Techniques to Detect Deepfakes
By Jordan Cortado on October 11, 2024
Introduction
With the advancement of Artificial Intelligence (AI), digital manipulation tools have reached unprecedented levels of capability. Among the most visible results are deepfakes, which have emerged as a significant threat in both the media and cybersecurity landscapes. A deepfake is media produced by technology that leverages AI to create highly realistic yet fabricated video, image, and audio content [2]. From political disinformation to cybercrime, the consequences of deepfakes are far-reaching. Beyond their societal impact lies an even more complex issue: their detection and analysis. As AI-generated forgeries grow increasingly indistinguishable from authentic content, digital forensics investigators face a daunting task in keeping pace with deepfake technology.
Deepfake Overview
The term "deepfake" combines AI's "deep learning" with the word "fake" [3]. Deepfake technology can be applied illicitly (scams, cybercrime, election manipulation, automated disinformation attacks) or legitimately (historical recreation, parody, technology demonstrations) [9]. Creating a deepfake requires two datasets: one for the source material and one for the target. A deep learning system is then used to analyze and learn the target's facial characteristics, features, movements, and expressions. That learned representation is then paired with the dataset in which the target's image, video, or voice is to be replicated [3]. Merging the two produces output that looks natural and realistic.
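The cited sources describe this pipeline only at a high level. One common concrete architecture behind face-swap deepfakes, assumed here for illustration rather than taken from the references, is a shared encoder with one decoder per identity: both decoders learn to reconstruct faces from the same latent space, and at generation time the source frames are simply decoded with the target's decoder. A minimal PyTorch sketch of that idea, with arbitrary layer sizes:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: maps a 64x64 RGB face crop to a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs a face crop from the latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_source, decoder_target = Decoder(), Decoder()
params = (list(encoder.parameters())
          + list(decoder_source.parameters())
          + list(decoder_target.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(source_batch, target_batch):
    """One training step: each decoder learns to reconstruct its own identity."""
    optimizer.zero_grad()
    loss = (loss_fn(decoder_source(encoder(source_batch)), source_batch)
            + loss_fn(decoder_target(encoder(target_batch)), target_batch))
    loss.backward()
    optimizer.step()
    return loss.item()

# After training, the "swap" is simply decoding source frames with the target decoder:
# fake_frames = decoder_target(encoder(source_frames))
```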
The Challenge Deepfakes Pose to Digital Forensics Investigators and Cybersecurity
The major challenge forensic investigators face is the fast-growing sophistication of deepfakes. The more accurate deepfakes become, the harder they are to identify and detect. On top of that, deepfake creation is becoming easier. Hany Farid, a digital media forensics specialist, demonstrated a preview of a not-yet-publicly-available tool that creates a full-blown deepfake from just a single image and an audio track [4]. Today, deepfakes threaten public safety, society, and national security. The improving realism of deepfakes is a global threat that leaves society asking "what is real?" and "what is fake?"
Another challenge is the pairing of deepfakes with other cybercrimes. As deepfakes become more realistic, a deepfake with audio and visuals combined with social engineering will be far more believable to its victims. The alarming prospect is criminals mastering the mass production of AI-generated deepfakes and fusing them with cybercrime-as-a-service offerings [7]. Cybersecurity professionals will then need to find an answer for mitigating this threat.
Detecting AI-Generated Media and Deepfakes
Digital forensics investigators rely on several techniques to single out deepfakes. The easiest distinction is when AI-generated media displays physical inconsistencies. Investigators analyze the video or audio and look for discrepancies in human anatomy [3, 4, 5], because AI has a harder time depicting certain complex parts of the body [5]. Common indicators include inconsistencies in the hands, ears, teeth, elbows, and toes.
Another technique is contextual analysis: examining the logic of the scene. Objects outside of the focal deepfake should obey the laws of physics and make contextual sense [5]. Key characteristics to examine here are shadows, architecture, and image quality.
Deepfake videos with audio are the hardest to identify. Because of how the human brain processes video, people tend to overlook minuscule discrepancies and focus on the main idea of the footage. To combat this, digital forensics experts can apply multi-modal analysis: analyzing multiple data sources and combining techniques [1, 2, 5]. These techniques include:
- Frame-by-Frame Analysis
  - Breaking the video down into individual frames to catch deviations between them.
- Blending Analysis
  - Closely related to edge analysis. Look for any variance in color and texture on the target's deepfaked face.
- Blink Analysis
  - Eye blinking is one of the hardest actions for generative AI to emulate, since training datasets usually contain few frames with the eyes closed. The key indicator is an unnatural blinking pattern.
- Edge Analysis
  - Analyzes the borders and contours between the deepfaked and real elements in the video. Possible indicators are inconsistent, pixelated edges and lighting irregularities.
- Error Level Analysis
  - Examines variances in compression error across a potential deepfake. Re-saving the image or frame at a known quality and comparing it to the original reveals regions whose error levels differ from the rest, which often correspond to edited areas (see the sketch after this list).
- Speed Analysis
  - Compares the cadence of the voice with the way the lips move. AI still has a hard time syncing the two; it has largely mastered only how lips look while speaking.
- Luminance Gradient Analysis
  - When a face is placed from one environment into another, AI can change the intensity, reflection, and direction of the light. Look for inconsistencies in these.
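Error level analysis is straightforward to prototype. The sketch below is an illustration rather than a tool named in the sources: it re-saves a JPEG frame at a fixed quality with Pillow and amplifies the pixel-wise difference, so regions that stand out in the resulting map have been compressed a different number of times than their surroundings.

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90, scale=15):
    """Re-save the image at a known JPEG quality and return the amplified
    difference map; edited regions tend to show different error levels."""
    original = Image.open(path).convert("RGB")

    # Re-compress in memory at the chosen quality.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)

    # Pixel-wise absolute difference, amplified so faint errors become visible.
    diff = ImageChops.difference(original, recompressed)
    return diff.point(lambda value: min(255, value * scale))

if __name__ == "__main__":
    # Hypothetical frame extracted from a suspect video.
    ela_map = error_level_analysis("suspect_frame.jpg")
    ela_map.save("suspect_frame_ela.png")
```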
After conducting these analyses with the appropriate tools, a determination can be made based on any discrepancies found.
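In practice, multi-modal analysis means combining the per-technique signals into one judgment. A minimal sketch of that aggregation step, assuming hypothetical per-technique suspicion scores between 0 and 1 produced by the analyses listed above:

```python
from dataclasses import dataclass

@dataclass
class AnalysisResult:
    technique: str
    score: float   # 0.0 = looks authentic, 1.0 = strongly suspicious
    weight: float  # how much the investigator trusts this technique

def fuse_scores(results):
    """Weighted average of per-technique suspicion scores."""
    total_weight = sum(r.weight for r in results)
    return sum(r.score * r.weight for r in results) / total_weight

# Hypothetical outputs of the individual analyses described above.
results = [
    AnalysisResult("frame_by_frame", score=0.62, weight=1.0),
    AnalysisResult("blink", score=0.80, weight=0.8),
    AnalysisResult("edge", score=0.55, weight=0.6),
    AnalysisResult("error_level", score=0.71, weight=0.7),
    AnalysisResult("speed_lip_sync", score=0.66, weight=0.9),
]

suspicion = fuse_scores(results)
verdict = "likely manipulated" if suspicion > 0.5 else "no strong evidence of manipulation"
print(f"fused suspicion score: {suspicion:.2f} -> {verdict}")
```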
Fighting Fire with Fire
There are a number of tools for combating deepfakes; the most prominent are AI-based detection systems, which use machine learning to inspect the authenticity of digital media [6]. Researchers at the University at Buffalo created the DeepFake-o-meter, a free, open-source platform that uses AI algorithms to identify deepfakes and automates the multi-modal approach. Working through a video frame by frame, DeepFake-o-meter's algorithms distinguish successful from unsuccessful captures and output a probability of the media's authenticity. The platform combines multiple algorithms and processes to reach a conclusion about whether the media is genuine [8]. Other recognized AI-based detection systems include Microsoft Video Authenticator, FaceForensics++, and Deepware Scanner [6].
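The sources do not publish DeepFake-o-meter's internals, but the general pattern of automated, frame-level detection is easy to sketch: run a binary classifier over sampled frames and report an aggregate probability. The example below is an assumption-laden illustration; `model` stands for a pre-trained PyTorch classifier whose sigmoid output is taken to be the probability that a frame is fake, and `transform` is whatever preprocessing that model expects.

```python
import cv2
import torch

def video_fake_probability(video_path, model, transform, sample_every=10):
    """Sample frames from a video, score each with an (assumed) fake/real
    classifier, and return the average fake probability."""
    capture = cv2.VideoCapture(video_path)
    probabilities = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            tensor = transform(rgb).unsqueeze(0)  # shape: (1, C, H, W)
            with torch.no_grad():
                # Assumed model contract: sigmoid output = P(frame is fake).
                probabilities.append(model(tensor).sigmoid().item())
        index += 1
    capture.release()
    return sum(probabilities) / len(probabilities) if probabilities else None
```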
Conclusion
As deepfake technology continues to advance rapidly, it presents a growing challenge for digital forensics, cybersecurity, and society as a whole. The ability of generative AI to produce hyper-realistic manipulated media threatens trust in everything we see and hear online. As deepfakes evolve, so too must the strategies used to identify fabricated content and maintain the integrity of digital media.
References
[1] Amerini, I., Barni, M., Battiato, S., Bestagini, P., Boato, G., Bonaventura, T. S., Caldelli, R., Vitulano, D., Villari, M., Tonti, C. M., Tubaro, S., Salvi, D., Perazzo, P., Ortis, A., Orrù, G., Montibeller, A., Micheletto, M., Marcialis, G. L., Mandelli, S., … De Natale, F. (2024, August 1). Deepfake Media Forensics: State of the Art and Challenges Ahead. arXiv. https://arxiv.org/html/2408.00388v1
[2] Blanchfield, D. (2023, July 11). Deepfake Technology and its Impact on Digital Forensics. Elnion. https://elnion.com/2023/07/11/deepfake-technology-and-its-impact-on-digital-forensics/
[3] Didit News. (2023, December 20). Deepfakes Explained: Creation, Risks, and Protection in 2024. Didit. https://didit.me/blog/deepfake-what-it-is-how-it-s-created-and-why-you-should-be-cautious#how-are-deepfakes-created
[4] Farid, H., & UC Berkeley. (2024, September 26). Digital forensic expert breaks down political deepfakes | Academic Review. YouTube. https://www.youtube.com/watch?v=tVWRfFY9KPA
[5] Nguyen, K., & ABC News. (2024, September 25). Digital forensics expert breaks down how to spot AI-generated “people” | ABC News Verify. YouTube. https://www.youtube.com/watch?v=l3j-vVLhZd8
[6] Rawal, S. (2024, September 11). Deepfakes: Comprehensive Analysis, Challenges, Mitigation, and Forensic Investigation. LinkedIn. https://www.linkedin.com/pulse/deepfakes-comprehensive-analysis-challenges-mitigation-shaurya-rawal-i9y7f
[7] Townsend, K. (2024, September 17). The AI threat: Deepfake or Deep Fake? Unraveling the True Security Risks. SecurityWeek. https://www.securityweek.com/the-ai-threat-deepfake-or-deep-fake-unraveling-the-true-security-risks/
[8] WebsEdge Science. (2024, February 15). Building the DeepFake-o-Meter: Algorithms to Expose Digital Forgeries – UB Media Forensic Lab. YouTube. https://www.youtube.com/watch?v=HkxXt9Xw6rA
[9] Fortinet. (n.d.). What is Deepfake: AI Endangering your Cybersecurity? https://www.fortinet.com/resources/cyberglossary/deepfake