Google has revealed that its flagship AI model, Gemini, was targeted in a large-scale attempt to copy how it thinks.
According to the company, attackers submitted more than 100,000 carefully designed prompts in what experts describe as a “model extraction” or “distillation” attempt, a technique used to reverse-engineer artificial intelligence systems by analysing their responses.
The incident shows that as AI models become more powerful and valuable, they are also becoming prime targets for intellectual property theft.
What Really Happened?
Google’s security teams detected unusual activity involving repeated, structured prompts sent to Gemini. These weren’t normal user questions like “write me an email”. Instead, the prompts were strategically crafted to uncover how Gemini reasons, breaks down problems, and generates responses.
Collecting thousands of outputs over time (in this case, more than 100,000) can give someone a dataset large enough to train a separate AI system that behaves similarly to the original.
This method does not require breaking into Google’s servers or stealing source code. It relies entirely on interacting with the AI through its normal interface.
Google says it flagged and blocked the suspicious activity before any significant damage was done.
What Is Model Extraction?
Model extraction is a growing concern in the AI industry. Unlike traditional software, large language models like Gemini are accessible through APIs and chat interfaces. That accessibility is essential; it's what makes them useful. But it also creates a vulnerability.
Here’s how it works:
An attacker sends thousands of targeted prompts.
They collect and analyse the responses.
They use those responses as training data.
They build a “student model” that mimics the original system’s behaviour.
This process is sometimes called “distillation” because it extracts the essence of how a model behaves.
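To make the idea concrete, here is a minimal sketch of what the extraction loop looks like in principle. Everything below is illustrative: the function names, prompt templates, and file names are invented, and in a real attempt the stand-in query function would be a call to the target model's public API rather than a local stub.

```python
import json

# Hypothetical stand-in for the target model's public interface.
# In a real extraction attempt this would be an API call to the hosted model.
def query_target_model(prompt: str) -> str:
    return f"[target model's answer to: {prompt}]"

# Steps 1-2: send many structured prompts and collect the responses.
def collect_training_pairs(prompts: list[str]) -> list[dict]:
    return [{"prompt": p, "response": query_target_model(p)} for p in prompts]

# Step 3: save the pairs as supervised fine-tuning data.
def save_as_jsonl(pairs: list[dict], path: str) -> None:
    with open(path, "w", encoding="utf-8") as f:
        for pair in pairs:
            f.write(json.dumps(pair) + "\n")

if __name__ == "__main__":
    # Prompts are typically generated programmatically to probe specific
    # capabilities (reasoning, coding, summarisation, and so on).
    templates = [
        "Explain step by step how to solve: {}",
        "Summarise the key reasoning behind: {}",
    ]
    topics = ["problem A", "problem B", "problem C"]
    prompts = [t.format(topic) for t in templates for topic in topics]

    pairs = collect_training_pairs(prompts)
    save_as_jsonl(pairs, "student_training_data.jsonl")
    # Step 4: a separate "student" model would then be fine-tuned on this
    # file, approximating the target's behaviour without ever touching its
    # weights or source code.
```

Scaled from a handful of prompts to 100,000 or more, this loop is the entire attack: no intrusion, no stolen code, just systematic use of the public interface.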
For companies like Google, OpenAI, Anthropic, and others, this represents a serious competitive threat. Developing advanced AI systems costs billions of dollars in research, computing power, and talent. If another organisation can recreate similar behaviour at a fraction of the cost, it undermines that investment.
Why This Matters Beyond Google
This isn’t just a Google problem. It signals a broader shift in how AI systems are being targeted.
1. AI Is Now Intellectual Property Gold
Modern AI models are among the most valuable digital assets in the world. Their reasoning patterns, training techniques, and architecture are closely guarded trade secrets.
Attempts to clone or approximate them could reduce the competitive advantage of companies leading the AI race.
2. Traditional Cybersecurity Rules Don’t Fully Apply
In most cyberattacks, hackers try to break into a system. In model extraction, the attacker doesn’t break in; they stay within the rules of access. They simply use the system in a systematic, strategic way.
That makes detection more complex. Security systems must now distinguish between a power user and a malicious actor conducting extraction.
3. Smaller Companies May Be at Greater Risk
If a well-resourced company like Google faces this type of attempt, smaller AI startups could be even more vulnerable.
Many startups train specialised AI models on proprietary business data, research, or financial forecasting. If extracted, those models could lose their uniqueness overnight.
How Google Responded
Google says its threat intelligence and monitoring systems detected the abnormal usage patterns early.
The company identified repeated, structured prompts designed to probe Gemini’s reasoning logic and blocked the associated accounts.
While Google did not disclose who was behind the attempt, reports suggest that such activities are often commercially motivated rather than state-sponsored.
Google is now strengthening its detection systems to better identify behaviour that resembles model extraction. This could involve monitoring prompt patterns, usage frequency, and structured querying behaviour that deviates from typical user activity.
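Google has not described its detection methods, but the kind of usage-pattern check it refers to can be sketched roughly as follows. The thresholds, field names, and the simple structural fingerprint here are all invented for illustration; real systems would use far richer signals.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AccountActivity:
    account_id: str
    prompts: list[str]  # prompts sent within some monitoring window

def template_signature(prompt: str, prefix_words: int = 4) -> str:
    # Crude structural fingerprint: keep only the first few words, so prompts
    # generated from the same leading template map to the same signature.
    return " ".join(prompt.lower().split()[:prefix_words])

def looks_like_extraction(activity: AccountActivity,
                          volume_threshold: int = 5000,
                          repetition_threshold: float = 0.8) -> bool:
    # Flag only high-volume accounts whose prompts are structurally repetitive.
    if len(activity.prompts) < volume_threshold:
        return False
    signatures = Counter(template_signature(p) for p in activity.prompts)
    most_common_count = signatures.most_common(1)[0][1]
    repetition = most_common_count / len(activity.prompts)
    return repetition >= repetition_threshold
```

The hard part, as noted above, is tuning such checks so that a legitimate power user automating thousands of queries is not treated the same way as an account methodically harvesting training data.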
Importantly, there is no evidence that Gemini’s core systems were breached or that user data was compromised.
The AI Race Is Now Also a Security Race
The race in artificial intelligence is no longer just about who builds the smartest model. It is also about who can protect their model from being copied.
Google’s disclosure about the 100,000-prompt attempt to clone Gemini shows that the next phase of AI competition will involve not just innovation, but defence.
As AI becomes central to business, media, education, and everyday life, protecting the intelligence behind these systems may become one of the industry’s most critical challenges.