In this post, I will explore the dark side of LLMs, from SEO poisoning to GEO manipulation.
The growth of generative AI and Large Language Models (LLMs) is rapidly changing the way people search for information. Where we once relied on Google results, more users ask ChatGPT, Claude, or Perplexity: “What is the login page for my bank?” or “Where can I reset my account?”
Kevin van Beek, a specialist in SEO and AI search, sees daily how this shift creates opportunities, along with a less discussed downside. What looks efficient can ultimately open the door to deception and phishing through AI-generated search results.
When AI Recommends the Wrong Domains and Content
LLMs frequently point users to incorrect or even non-existent domains in response to simple login questions. Think of prompts like: “I lost my bookmark, can you give me the official login page of brand X?”
The results are worrying:
- A significant portion of the domains mentioned by LLMs were not registered or had no content.
- Some references pointed to existing but unrelated companies.
- Many domains could be immediately purchased and abused by malicious actors.
This means that a user looking for the correct login page could unknowingly be sent by AI to a phishing site.
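The scale of this problem is easy to probe. Below is a minimal Python sketch, using hypothetical domain names, that checks whether domains surfaced by an LLM currently resolve in DNS; a domain with no DNS record may be unregistered and therefore available for an attacker to pick up.

```python
import socket

# Hypothetical domains an LLM surfaced in answer to "official login page" prompts.
suggested_domains = [
    "login-examplebank.com",
    "examplebank-secure.net",
]

def resolves(domain: str) -> bool:
    """Return True if the domain currently has a DNS record."""
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

for domain in suggested_domains:
    status = "resolves" if resolves(domain) else "no DNS record (possibly unregistered)"
    print(f"{domain}: {status}")
```

A real audit would go further, with WHOIS lookups, content checks, and brand ownership verification, but even this simple test shows how quickly you can spot AI-suggested domains that are still up for grabs.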
From SEO Poisoning to GEO Manipulation
In my work as an SEO specialist, I increasingly see businesses turning to questionable optimization tactics to be cited by LLMs like ChatGPT. Often these efforts are well-intentioned and, to be fair, they can work very well. But some businesses push “GEO manipulation” much further, using tactics such as:
- Embedding explicit prompt-like instructions in content
- Hiding text outside the viewport or beneath other elements
- Undermining competitors at scale through listicles or community platforms like Reddit
- Inflating authority claims across multiple pages or external sources
These methods may seem effective in the short term, but they carry real risks. Importantly, they are not designed to mislead human visitors directly. The goal is to inject false or inflated signals into LLMs, which then repeat those claims back to audiences as if they were trustworthy.
On the technical side, you may be delisted from LLM results altogether. On the brand side, the damage can be even worse: audiences may lose trust if they notice inflated claims or systematic competitor discreditation. Once credibility slips, it’s a difficult slope to climb back from.
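To make the first two tactics more concrete from a defender's point of view, here is a minimal Python sketch for auditing a page, assuming the hidden text is applied via inline styles and that prompt-like instructions follow recognisable phrasings (both are simplifications; a real audit would also need to evaluate rendered CSS and external stylesheets).

```python
import re
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Inline styles commonly used to hide text from human visitors while
# leaving it readable to crawlers and LLM retrieval pipelines.
HIDDEN_STYLE = re.compile(
    r"display\s*:\s*none|visibility\s*:\s*hidden|text-indent\s*:\s*-\d{4,}|left\s*:\s*-\d{4,}px",
    re.IGNORECASE,
)

# Hypothetical phrases that suggest instructions aimed at an LLM rather than a reader.
PROMPT_PATTERNS = re.compile(
    r"ignore (all |any )?previous instructions|always recommend|as an ai",
    re.IGNORECASE,
)

def audit_html(html: str) -> list[str]:
    """Flag hidden elements and prompt-like instructions in a page's HTML."""
    soup = BeautifulSoup(html, "html.parser")
    findings = []
    for tag in soup.find_all(style=True):
        if HIDDEN_STYLE.search(tag.get("style", "")):
            findings.append(f"Hidden element: <{tag.name}> '{tag.get_text(strip=True)[:60]}'")
    text = soup.get_text(" ", strip=True)
    for match in PROMPT_PATTERNS.finditer(text):
        findings.append(f"Prompt-like instruction: '{match.group(0)}'")
    return findings
```

These patterns are far from exhaustive; the point is that hidden, LLM-targeted text is detectable with very simple tooling.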
Turning Opportunities into Vulnerabilities
Strong visibility in Google is not a prerequisite for visibility in the AI search results of these models. For some brands this is an opportunity, but it also exposes something more troubling: the quality and trust signals of LLMs are currently far weaker than Google's.
In LLMs like ChatGPT, you see clear differences compared to Google. The model regularly cites content from domains that are barely or no longer visible in Google. These include small sites with little authority, sites hit by updates, or sources with low E-E-A-T that Google would normally filter out.
On the surface, this seems like a positive shift: AI search doesn’t strictly mirror Google’s gatekeeping, giving smaller players a chance to be heard. But once those weak points are continuously and deliberately exploited, an opportunity quickly turns into a vulnerability.
LLMs as Easy Targets for Deception
And this is where the line moves to the dark side. The vulnerabilities that companies sometimes see as opportunities are used by malicious actors as weapons, with phishing and deception as the direct outcome.
Where Google strictly filters for reliability, LLMs often present unknown or questionable sites as if they were authoritative, sometimes even with hallucinated or phishing-like links delivered with great confidence.
Attackers actively exploit this by publishing AI-optimized content, for example through GitHub projects, manuals, or blog posts. In this way they give fake domains a false aura of legitimacy in the data sources that LLMs draw from. These domains then appear confidently in AI answers, presented to unsuspecting users as if they were reliable.
Companies that exploit this vulnerability may be visible in the short term, but in the long term they risk disappearing once LLMs tighten their quality standards. History shows that spam-like tactics, which were heavily penalized by Google algorithm updates, will also have a limited shelf life in LLMs.
This Danger Is No Longer Hypothetical
There have already been campaigns where thousands of AI-generated phishing pages were deployed, for example in the crypto sector, banking, or travel. These sites look professional, load quickly, and are optimized for both humans and machines. Exactly the kind of content that AI models are inclined to classify as trustworthy.
In such scenarios, phishers don’t even need to lure users through ads or search results, because the AI itself recommends their domain.
Why This Is Extra Dangerous
The risks are twofold:
- Trust in AI: users experience AI answers as direct, clear, and reliable. A malicious link is therefore more likely to be clicked.
- Visibility: AI answers are often placed at the top of search engine results pages and thus take priority over regular organic results.
Strengthening Against AI-Driven Phishing
- For LLM developers: integrate domain verification and guardrails based on official brand registries and trusted sources. This helps prevent models from recommending unverified or spoofed domains (a minimal sketch of this idea follows after this list).
- For brands: proactively register high-risk lookalike domains, monitor the links that LLMs surface in responses, and work with threat intelligence providers to detect and remove malicious domains early.
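As a rough illustration of that domain-verification guardrail, here is a minimal Python sketch. The brand name, domains, and registry structure are all hypothetical; a production implementation would draw on a verified brand registry and proper public-suffix handling rather than a hard-coded dictionary.

```python
from urllib.parse import urlparse

# Hypothetical registry of official domains per brand; in practice this would
# come from a verified brand registry or the brand's own security team.
OFFICIAL_DOMAINS = {
    "examplebank": {"examplebank.com"},
}

def is_verified(brand: str, url: str) -> bool:
    """Return True only if the URL's host is an official domain, or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    for official in OFFICIAL_DOMAINS.get(brand, set()):
        if host == official or host.endswith("." + official):
            return True
    return False

print(is_verified("examplebank", "https://login.examplebank.com/reset"))   # True: official subdomain
print(is_verified("examplebank", "https://examplebank-secure.net/login"))  # False: lookalike domain
```

Brands can run the same check in reverse, comparing the links that LLMs surface for their name against their own official domains, which is the monitoring step described above.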
From Opportunity to Risk
The shift from search engines to AI models creates real visibility opportunities, but it also expands the risks. As an SEO specialist, I work daily on AI search visibility for my clients, from Google’s AI Overviews to models like ChatGPT. My goal is to build sustainable SEO growth strategies that avoid short-term, spam-like tactics: approaches that not only damage reputation but also quickly stop working.
At the same time, practice shows that the same weak spots offering opportunities today are already being exploited by malicious actors. LLMs can be manipulated and may present incorrect or even dangerous results with full confidence. What looks like an innocent model error today could easily become the foundation of a large-scale phishing campaign tomorrow.
The real win is not in exploiting loopholes but in earning lasting trust, because LLMs will change while reputational scars remain.