Iran-based hackers are manipulating Google’s AI chatbot, Gemini, to gather intelligence and bolster their attacks. It is no surprise that hackers are capitalising on AI tools for nefarious ends.

Google’s Threat Intelligence Group published a report detailing how hacking networks around the world are using Gemini to create content for local influence operations, conduct research, and orchestrate computer network attacks such as phishing.

Although Iranian bad actors were the “heaviest” users of Gemini, the report identified fraudsters’ techniques across 57 nations, including China, Iran, North Korea, and Russia.

Access to Google’s globally available AI tool lets these groups experiment with the technology to fuel their own operations and find loopholes to boost their productivity; AI, as we know, is a double-edged sword. So far, the report found that hackers have not developed “novel capabilities” through Gemini, but AI developers may not have long. The availability of efficient, advanced AI models boosts innovation and serves criminals alike.

30% of Gemini use by hackers could be traced to Iran.

As new high-profile AI models debut in the mainstream, namely DeepSeek, there is concern over how many will be exploited by hackers for a variety of tasks, from researching military targets to troubleshooting malware code. DeepSeek R1 has rattled competitors’ market rankings by offering similar AI capabilities at a fraction of the cost.

Chinese hacking groups in particular researched US IT and military organisations and attempted to glean information about US intelligence agencies.

US developers will be on high alert despite “no new threats” emerging for now. The report, which breaks down hackers by region, uncovers a range of schemes, including attempts to craft Gmail phishing attacks and bypass verification methods.