
Russia, China, North Korea, and Iran Used GPT for 'Malicious Cyber Activities,' OpenAI Says

OpenAI shut down accounts affiliated with multiple “state-affiliated threat actors.” Microsoft said that no “significant” cyber campaigns were carried out.
Image: Getty Images

OpenAI shut down multiple accounts affiliated with the governments of China, Russia, Iran, and North Korea, which it said were attempting to use its AI chatbot services “in support of malicious cyber activities,” the company announced in a blog post on Wednesday. 

The announcement came as a result of OpenAI’s collaboration with Microsoft Threat Intelligence. Microsoft stated in its own blog post that it had not found evidence of these actors carrying out any significant cyberattacks, but that its findings were largely “representative of an adversary exploring the use cases of a new technology.” The state-linked groups used OpenAI’s services for research into companies and intelligence agencies, translation, generating content for hacking campaigns, and simple coding tasks. Of OpenAI’s products, ChatGPT and Whisper—its speech transcription and translation tool—fit these use cases.


“Importantly, our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely,” Microsoft stated. “At the same time, we feel this is important research to publish to expose early-stage, incremental moves that we observe well-known threat actors attempting, and share information on how we are blocking and countering them with the defender community.”

Microsoft Threat Intelligence identified accounts belonging to Forest Blizzard, a Russian state-affiliated threat actor, which used OpenAI’s services to research satellite communication protocols and radar technology, as well as for help with coding. Microsoft wrote in a blog post that Forest Blizzard has been “extremely active” throughout the war in Ukraine, and that its operations “play a significant supporting role to Russia’s foreign policy and military objectives both in Ukraine and in the broader international community.”

Forest Blizzard is Microsoft’s name for Russia’s notorious Unit 26165, also known as Fancy Bear or APT28, which has long targeted journalists, governments, and organizations around the world. In 2016, Fancy Bear hackers contributed to the chaos of that year’s U.S. presidential election by breaking into the Democratic National Committee.

“Forest Blizzard’s use of LLMs has involved research into various satellite and radar technologies that may pertain to conventional military operations in Ukraine, as well as generic research aimed at supporting their cyber operations,” Microsoft stated. “Microsoft observed engagement from Forest Blizzard that were representative of an adversary exploring the use cases of a new technology. As with other adversaries, all accounts and assets associated with Forest Blizzard have been disabled.”

Microsoft also identified two Chinese state-affiliated threat actors, Charcoal Typhoon and Salmon Typhoon, which OpenAI said used its services to debug code and translate technical papers. Both are well-known hacking groups, also tracked as Aquatic Panda and Maverick Panda, respectively. Charcoal Typhoon, which Microsoft wrote largely targets Taiwan, France, and institutions that oppose China’s policies, used OpenAI’s large language models to conduct “limited exploration of how LLMs can augment their technical operations,” including “generating content that could be used to social engineer targets.” Microsoft said Salmon Typhoon’s activity resembled queries to a public search engine.

The North Korean threat actor was Emerald Sleet—also known as Kimsuky—which used OpenAI to generate code and content for phishing attacks. It additionally “identif[ied] experts and organizations focused on defense issues in the Asia-Pacific region,” the blog post stated. Kimsuky is a hacking group that, according to the Cybersecurity and Infrastructure Security Agency, focuses its intelligence collection on South Korea, Japan, and the United States. In addition, Iran-based Crimson Sandstorm was identified as using OpenAI to generate phishing content and code, as well as to research how malware could evade detection.

OpenAI said the activity it found was consistent with external assessments that its GPT-4 could offer only limited help with “malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools.” All discovered accounts had been disabled, it said.

“The vast majority of people use our systems to help improve their daily lives,” OpenAI’s post read. “By continuing to innovate, investigate, collaborate, and share, we make it harder for malicious actors to remain undetected across the digital ecosystem and improve the experience for everyone else.”