Microsoft and OpenAI say hackers are using ChatGPT to improve cyberattacks and hone their craft

Microsoft and OpenAI disclosed today that state-sponsored hackers are using advanced language models such as ChatGPT to enhance their cyberattacks. The joint research revealed instances of Russian, North Korean, Iranian, and Chinese-backed groups using the tools to research targets, improve scripts, and develop social engineering techniques.

This groundbreaking research, detailed on both companies’ websites, exposes how hackers affiliated with foreign governments are incorporating generative artificial intelligence into their attacks. Microsoft specifically highlighted the use of OpenAI’s technology by five hacking groups associated with China, Russia, North Korea, and Iran.

Contrary to concerns in the tech industry about AI generating exotic attacks, hackers are employing it for more mundane tasks like drafting emails, translating documents, and debugging code, according to the companies.

OpenAI, in collaboration with Microsoft Threat Intelligence, also reported disrupting five state-affiliated actors attempting to leverage AI services for malicious cyber activities.

“In partnership with Microsoft Threat Intelligence, we have disrupted five state-affiliated actors that sought to use AI services in support of malicious cyber activities. We also outline our approach to detect and disrupt such actors in order to promote information sharing and transparency regarding their activities,” OpenAI said.

In a blog post, Microsoft said: “Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent.” Tom Burt, who oversees Microsoft’s efforts to combat major cyberattacks, added, “They’re just using it like everyone else is, to try to be more productive in what they’re doing.”

The Strontium group, linked to Russian military intelligence, was found using large language models (LLMs) to research satellite communication protocols, radar imaging technologies, and specific technical parameters. The group also employed LLMs for basic scripting tasks such as file manipulation and data selection.
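
For context, the “basic scripting tasks” the companies describe really are mundane. Below is a minimal sketch, in Python, of the sort of file-manipulation and data-selection script an LLM can readily draft; the directory, field name, and filter value are invented for this illustration and are not taken from the report.

```python
import csv
from pathlib import Path

# Hypothetical illustration only: merge the CSV files in a directory
# and keep just the rows whose chosen field matches a target value.
# All names here ("logs", "combined.csv", "status", "active") are
# made up for the example.

def select_rows(input_dir: str, output_file: str, field: str, value: str) -> int:
    rows_written = 0
    with open(output_file, "w", newline="") as out:
        writer = None
        for path in sorted(Path(input_dir).glob("*.csv")):
            with open(path, newline="") as f:
                reader = csv.DictReader(f)
                for row in reader:
                    if row.get(field) != value:
                        continue
                    if writer is None:
                        # Use the first matching file's header for the output.
                        writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
                        writer.writeheader()
                    writer.writerow(row)
                    rows_written += 1
    return rows_written

if __name__ == "__main__":
    print(select_rows("logs", "combined.csv", "status", "active"))
```

Nothing in such a script requires an LLM; the companies’ point is that the models are speeding up routine work of exactly this kind, not inventing new classes of attack.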

“Is it providing something new and novel that is accelerating an adversary, beyond what a better search engine might? I haven’t seen any evidence of that,” said Bob Rotsted, who heads cybersecurity threat intelligence for OpenAI.