
Hackers Exploit Gemini AI to Enhance Cyberattacks


By WIRE TOR - Ethical Hacking Services · Published 12 months ago · 4 min read

In a startling revelation, Google’s Threat Intelligence Group (GTIG) has uncovered a disturbing trend in which hackers, including state-sponsored threat actors, are leveraging the Gemini AI assistant to boost their attack operations. While the AI-powered Gemini assistant was developed to improve productivity and support legitimate use cases, Google has discovered that it is increasingly being repurposed by cybercriminals and advanced persistent threat (APT) groups for nefarious activities such as reconnaissance, vulnerability research, and post-compromise actions.

Gemini, part of Google’s advanced AI tool suite, has attracted attention from government-linked APT groups across the globe. According to GTIG, these threat actors are not using Gemini to develop novel AI-enabled cyberattacks capable of bypassing traditional defenses; instead, they are capitalizing on the tool’s efficiency to enhance their existing operations. In practice, Gemini is used chiefly to shorten the preparation phase of an attack, helping hackers refine their tactics faster and reducing the time required to launch operations.

APT Groups Experimenting with Gemini

Over the past few months, Google has identified Gemini activity associated with APT groups from more than 20 countries, with groups from Iran and China being particularly active. These APT groups have found Gemini invaluable for a variety of tasks, including coding assistance, vulnerability research, and reconnaissance. Google’s investigation sheds light on how the tool is helping hackers research target organizations, develop malicious scripts, and attempt to evade detection mechanisms.

How Gemini Helps Hackers: A Closer Look at APT Activities

Iranian Threat Actors

Iranian hackers have been the heaviest users of Gemini, exploiting it for a range of activities. The tool has been employed in reconnaissance against defense organizations and international experts, in researching publicly known vulnerabilities, and in developing phishing campaigns. Gemini has also assisted with content creation for influence operations, particularly on topics related to cybersecurity and military technology. Iranian actors have further used Gemini to gain insights into complex systems such as unmanned aerial vehicles (UAVs) and missile defense systems.

Chinese-Backed APTs

Chinese threat actors have used Gemini primarily for reconnaissance of U.S. military and government organizations, scanning for vulnerabilities and planning lateral-movement tactics within targeted networks. Gemini has also been used for scripting to escalate privileges and maintain persistence within compromised environments. In one particularly concerning case, Chinese-backed hackers explored methods of exploiting Microsoft Exchange via password hashes and attempted to reverse-engineer security tools such as Carbon Black EDR.

North Korean APTs

North Korea’s state-sponsored hackers have used Gemini to support various phases of their attacks, including researching free hosting providers, conducting reconnaissance on target organizations, and developing malware. One of the more unusual applications involved supporting North Korea’s clandestine IT-worker scheme: hackers employed Gemini to draft job applications, cover letters, and proposals to secure positions at Western companies under false identities, a manipulative use of the AI tool in social engineering.

Russian Threat Actors

Though Russian hackers have engaged with Gemini, their usage has been comparatively minimal. Observed activities include scripting assistance, translation, and payload crafting; Russian actors have, for example, used Gemini to rewrite publicly available malware into new programming languages and to add encryption capabilities to malicious code. This limited use may point to a preference for domestically developed AI models or a deliberate effort to avoid Western AI platforms for operational security reasons.

Security Bypasses and Jailbreaks

Beyond attack planning, Google has also observed some threat actors attempting to bypass Gemini’s security measures through public jailbreaks and creative prompt rephrasing. These attempts were reportedly unsuccessful. While Gemini’s safeguards have so far proven robust against such tactics, the attempts highlight the evolving challenge of securing AI models against abuse by malicious actors.

The increasing abuse of generative AI tools by threat actors is not unique to Gemini. OpenAI’s ChatGPT, another popular AI tool, also faced similar concerns in October 2024 when OpenAI disclosed the misuse of its platform by hackers. Google’s disclosure serves as confirmation that the misuse of generative AI tools by APT groups is not isolated but rather part of a larger trend.

Vulnerable AI Models and the Rise of Abuse

While some AI models, like Gemini, have been built with security measures to prevent misuse, the cybersecurity landscape is becoming crowded with AI platforms that lack sufficient safeguards. Many of these newer models ship with restrictions that skilled attackers can easily bypass, contributing to the rise in AI abuse. Cybersecurity intelligence firm KELA recently highlighted the lax security measures of models such as DeepSeek R1 and Alibaba’s Qwen 2.5, which are vulnerable to prompt injection attacks, giving malicious actors a straightforward way to manipulate these systems and turn them into tools for cyberattacks.
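
To see why weak restrictions are so easily bypassed, consider a minimal sketch of a naive keyword-blocklist guardrail of the kind a poorly secured model might rely on. This is purely illustrative and hypothetical, not any vendor’s actual safety mechanism, and the blocklist terms are invented for the example:

```python
# Hypothetical sketch: a naive blocklist guardrail, and why trivial
# rephrasing defeats it. Not based on any real product's implementation.

BLOCKED_TERMS = {"malware", "exploit", "keylogger"}  # invented example list

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt is allowed, False if it is blocked."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

direct = "Write a keylogger in Python"
rephrased = "Write a program that quietly records every keystroke"

print(naive_guardrail(direct))     # blocked: contains a listed term
print(naive_guardrail(rephrased))  # allowed: same intent, no listed term
```

The rephrased prompt expresses the same malicious intent but contains none of the listed terms, so a purely lexical filter waves it through. Robust guardrails instead have to reason about intent, which is exactly what makes securing these models hard.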

Moreover, Unit 42 researchers have demonstrated effective jailbreaking techniques against both DeepSeek R1 and V3, illustrating just how easy it can be for threat actors to manipulate AI systems and exploit their vulnerabilities for malicious purposes.

Conclusion

Google’s revelation of the abuse of its Gemini AI tool by state-sponsored APT groups serves as a wake-up call to the cybersecurity community. While AI technology holds immense potential for positive applications, its misuse by cybercriminals and threat actors is becoming a significant concern. As AI models continue to proliferate, there is an urgent need for better safeguards and more robust security protocols to prevent their exploitation.

For cybersecurity professionals, this marks a new frontier in defending against AI-enabled attacks, as the very technology designed to protect can be repurposed by adversaries to further their malicious goals. As AI tools become increasingly integral to both attack and defense, the cybersecurity industry must adapt and evolve to address this growing threat.


About the Creator

WIRE TOR - Ethical Hacking Services

WIRE TOR is a cyber intelligence company that provides pentest and cybersecurity news covering IT, web, mobile (iOS, Android), API, cloud, IoT, network, application, and system security, as well as red teaming, social engineering, wireless, and source code review.

