Microsoft Says Its AI Tools Detected Hackers from China, Russia, and Iran

A recent report revealed that state-backed hackers from Russia, China, and Iran have been using tools from Microsoft-backed OpenAI to refine their operations, employing large language models to enhance their espionage capabilities and drawing concern from cybersecurity officials in the West.

Microsoft announced a ban on state-backed hacking groups using its AI products, stating that it does not want threat actors to have access to the technology. The company tracked hacking groups affiliated with Russian military intelligence, Iran's Revolutionary Guard, and the Chinese and North Korean governments as they attempted to refine their hacking campaigns using AI tools.

The report described how these hacking groups used the large language models in different ways. Russian hackers used the models to research military operations in Ukraine, while North Korean hackers used them for spear-phishing campaigns. Iranian hackers utilized the models to write more convincing emails, and Chinese hackers experimented with asking questions about rival intelligence agencies and notable individuals.

Although officials did not disclose the volume of activity or the number of accounts suspended, they emphasized the novelty and power of AI technology, which has raised concerns about its potential for abuse. The ban on hacking groups extends across Microsoft's AI offerings, with the company emphasizing the need for safe and responsible deployment of AI.
