State Hackers Turn Google AI Into Attack Acceleration Tool, Report Finds

[Illustration: a hacker using artificial intelligence software on a computer screen to plan cyberattacks]
Google says state-backed cyber groups used its Gemini AI tool to support phishing campaigns, reconnaissance, and malware development. The company has disabled related accounts and tightened security controls.

By Precious E.

State hackers have turned Google AI into an attack acceleration tool, according to a new threat assessment released Thursday by Google’s security team.

The report says government-linked cyber groups from China, Iran, and North Korea used Google’s Gemini model to assist in reconnaissance, phishing campaigns, and malware development. Google said it has shut down accounts connected to the activity and strengthened internal safeguards.

Security analysts say the findings show how artificial intelligence is becoming embedded in modern cyber operations.

State Hackers Turn Google AI Into Attack Acceleration Tool Across Campaign Stages
Google’s Threat Intelligence Group detailed how different state-backed actors incorporated Gemini into various phases of their operations.

A China-linked group known as APT31 reportedly used structured prompts to have Gemini analyze vulnerabilities, including remote code execution flaws, and to interpret SQL injection test results tied to U.S. targets. Another China-based actor, UNC795, used the model to troubleshoot code and to help build automated auditing tools.

Iranian group APT42 used Gemini to research targets and draft tailored phishing messages. According to the report, the group fed the system biographical details of potential victims and asked it to generate engagement strategies.

North Korean operators tracked as UNC2970 relied on Gemini to collect open-source intelligence, focusing on cybersecurity and defense firms. The activity included gathering information about technical roles and salary ranges.

Google also said an unattributed group, UNC6418, used the model to identify email addresses and account credentials before launching phishing attacks connected to Ukraine.

The report indicates that artificial intelligence is not creating entirely new attack methods. Instead, it is helping threat actors complete tasks more efficiently.

AI-Generated Malware and Phishing Tools
Beyond nation-state groups, criminal networks have also begun testing AI systems in their operations.

Google identified malware that communicates with Gemini’s application programming interface to request and compile malicious code. One framework, tracked as HonestCue, generates C# source code through Gemini and executes it in memory.
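For context on the mechanism described above, the sketch below shows how any program can request generated source code from Gemini through Google's public google-generativeai Python SDK. The model name, prompt, and benign task are illustrative assumptions, and the sketch deliberately stops at retrieving text; it does not compile or execute anything, unlike the in-memory execution the report attributes to HonestCue.

```python
# Minimal sketch of requesting generated source code from the Gemini API.
# Illustrative only: the model name and prompt are assumptions, and the
# result is printed, not compiled or executed.
import os

import google.generativeai as genai

# Authenticate with an API key supplied via the environment.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])

model = genai.GenerativeModel("gemini-1.5-flash")

# Ask the model for a small, benign C# snippet.
response = model.generate_content(
    "Write a short C# method that returns the SHA-256 hash of a string."
)

# The generated source arrives as ordinary text in the response.
print(response.text)
```

At the network level, such a request is indistinguishable from ordinary developer traffic, which may help explain why Google's countermeasures center on disabling accounts and infrastructure rather than blocking the API itself.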

A phishing kit called CoinBait was also linked to AI-assisted development. The kit impersonates a cryptocurrency exchange to steal login credentials. Google assessed that some of this activity overlaps with a financially motivated threat cluster known as UNC5356.

Investigators also observed campaigns abusing public AI-sharing features to host deceptive instructions designed to trick users into installing malware.

Attempts to Extract AI Model Data
The report highlights attempts to extract information from Gemini itself through what are known as model distillation attacks.

In one case, more than 100,000 prompts were sent in an apparent effort to force the system to reveal detailed reasoning patterns. Google said this type of activity is aimed at replicating proprietary model capabilities.

The company said risks from these extraction efforts are concentrated on AI developers and service providers, not the general public.

The finding that state hackers have turned Google AI into an attack acceleration tool reflects a broader shift in cyber conflict: artificial intelligence is increasingly treated as a productivity layer that supports traditional hacking techniques.

Governments and private companies are investing heavily in AI platforms for business and research. As adoption grows, security teams are under pressure to prevent misuse without limiting legitimate access.

Google said it has removed accounts and infrastructure tied to the abuse and updated detection systems within Gemini.

The company’s report concludes that although AI has not reshaped the threat landscape, state-backed use of such tools is likely to expand. For cybersecurity professionals, the findings signal a new phase in digital espionage and cybercrime operations.
