Stable Diffusion Forensic Analysis
By Kevin Lanier on February 6, 2025
Executive Summary
Stable Diffusion is a text-to-image program that uses artificial intelligence to generate images from prompts the user enters. Since its launch in August 2022, AI-generated images have flooded the internet, and they pose a very real cybersecurity threat to employees if proper safeguards aren’t implemented. For example, an AI-generated image could trick an employee into revealing sensitive information if they believe the image verifies the sender’s identity. Companies should conduct regular training on AI-generated image risks, encourage reverse image searches, and provide access to AI image detection tools to help combat Stable Diffusion-focused cybercrime.
Background
AI-generated images pose the risk of tricking employees. For example, a 2023 benchmarking study found a 38.7% misclassification rate among participants who were shown a mix of real and AI-generated images. [1] Scammers can enter prompts into Stable Diffusion to generate an image tailored to a precise outcome. For example, if an employee asks a user to verify their identity with a picture of themselves holding something up, a scammer could generate an image for that exact situation, and an inattentive employee could fall for it. If nearly four in ten images were misjudged by the study’s sample group, a company’s workforce could plausibly fail at a similar rate and succumb to social engineering. According to data from Positive Technologies, 65% of successful cyberattacks on financial organizations incorporated phishing and social engineering. [2] Given that rate of success, companies need to take seriously the potential use of AI-generated images by cybercriminals.
Impact
Scammers who use AI-generated images for social engineering can steal money and sensitive information from the corporations they target. In 2023, the total financial impact of social engineering attacks worldwide was estimated at around $4.2 billion. [3] Given that scale, the possibility of AI-generated images contributing to the most successful attack method against financial organizations should not be taken lightly. For example, prompts could be entered into Stable Diffusion to create a fake image of an employee’s coworker that makes it look like they are attending a company event. That image could then be used to mask a phishing email, alongside a link directing employees to enter sensitive information such as login credentials or Social Security numbers.
Mitigation
As mentioned, companies will need to adopt mitigation strategies to combat this new form of social engineering. Knowledge about AI-generated images should be incorporated into employee training and awareness programs to reduce the chance of fake images being mistaken for real ones and vice versa. For example, employees could be trained to notice the telltale signs of an AI-generated image, such as an unusual number of fingers, unnatural lighting, or inconsistencies in facial features (e.g., mismatched earrings or asymmetrical eyes). AI detection tools could also be implemented, such as the hierarchical multi-level approach, which analyzes an image in multiple stages. It is like a multi-layered security check at an airport, where each level inspects a different aspect of a passenger’s identity, but for images instead. The study proposing this approach reported over 97% classification accuracy in differentiating real images from those generated by various AI architectures, including Generative Adversarial Networks (GANs) and diffusion models. [4] A rough sketch of how such a staged check might be wired together appears below.
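To make the staged idea concrete, here is a minimal illustrative sketch in Python of a two-stage screen: a cheap frequency-domain heuristic runs first, and only images it flags are escalated to a heavier trained classifier. This is not the method from [4]; the spectral heuristic, the threshold value, and the `load_trained_classifier` helper are all assumptions for illustration.

```python
# Minimal sketch of a hierarchical, multi-stage image check (illustrative only).
# Stage 1 applies a cheap frequency-domain heuristic; images it cannot clear
# are escalated to Stage 2, a placeholder for a trained GAN/diffusion detector
# like those evaluated in [4]. The threshold and the load_trained_classifier
# helper are hypothetical.

import numpy as np
from PIL import Image


def high_freq_energy_ratio(gray: np.ndarray) -> float:
    """Fraction of spectral energy outside the low-frequency center band."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    ch, cw = h // 4, w // 4
    low = spectrum[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw].sum()
    return float(1.0 - low / spectrum.sum())


def screen_image(path: str, escalate_threshold: float = 0.35) -> str:
    """Stage 1: cheap spectral screen. Stage 2: deeper model-based check."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    if high_freq_energy_ratio(gray) < escalate_threshold:
        return "pass: no spectral red flags at stage 1"
    # Stage 2 (stub): hand the image to a trained GAN/diffusion detector.
    # classifier = load_trained_classifier()   # hypothetical helper
    # verdict = classifier.predict(path)
    return "escalate: route to stage-2 classifier for review"


if __name__ == "__main__":
    print(screen_image("suspect.png"))  # example path; replace with a real file
```

Structuring the check this way keeps the expensive model off the hot path: most routine images are cleared by the cheap first stage, and only suspicious ones consume classifier time.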
Relevance
Since its launch in August 2022, Stable Diffusion has undergone several updates that improved the fidelity of its generated images. As AI advances, generated images will only become more convincing to the naked eye. This is why it is so important for organizations to put proper mitigations in place and to keep updating them as the technology advances. AI-generated images will likely reach a point where the telltale signs are no longer present, so organizations that have already adopted AI detection software will be better prepared for the threat going forward.
References
[1] Yuan, Y., Zhang, Z., Wang, Y., & Wang, S. (2023, April). Seeing is not always believing: Benchmarking human and model perception of AI-generated images. arXiv. https://arxiv.org/abs/2304.13023
[2] Positive Technologies. (2024, June). Financial industry cyberthreats: H2 2023 – H1 2024. https://global.ptsecurity.com/analytics/financial-industry-cyberthreats-h2-2023-h1-2024
[3] Madrekar, A. (2024, October 7). Social engineering statistics by types, companies in United States and country. Sci-Tech Today. https://www.sci-tech-today.com/stats/social-engineering-statistics/
[4] Gragnaniello, D., Cozzolino, D., Poggi, G., Verdoliva, L., & Sansone, C. (2023, March). Level up the deepfake detection: A method to effectively discriminate images generated by GAN architectures and diffusion models. arXiv. https://arxiv.org/abs/2303.00608