A suspected North Korean state-sponsored hacking group used ChatGPT to create a deepfake of a military ID document to attack a target in South Korea, according to cybersecurity researchers.
Attackers used the artificial intelligence tool to generate a realistic-looking draft of a South Korean military identification card, intended to make a phishing email appear more credible, according to research published Sunday by Genians, a South Korean cybersecurity firm. Rather than containing a genuine image, the email linked to malware capable of extracting data from recipients’ devices, Genians said.
The group responsible for the attack, which researchers have dubbed Kimsuky, is a suspected North Korea-sponsored cyber-espionage unit previously linked to other spying efforts against South Korean targets. The U.S. Department of Homeland Security said Kimsuky “is most likely tasked by the North Korean regime with a global intelligence-gathering mission,” according to a 2020 advisory.