AI vs. AI: A Modern-Day Cyber Cold War
We’ve all heard the horror stories surrounding artificial intelligence, with fears ranging from widespread job losses to a complete takeover by machines. However, the new technology has its upsides as well.
As with any revolutionary discovery, its potential can be used for both good and evil. Nothing illustrates this polarity more clearly than the current cyber landscape. While bad actors use AI to fuel their cybercrimes, security experts are using its capabilities to better thwart these attacks.
A Deeper Look at the Dichotomy of Artificial Intelligence
The use of AI to facilitate cybercrime has created a digital Cold War between cybersecurity professionals and online scammers. Criminals use the new software to plan, enhance, and execute their ploys. On the other side, the same technology is used to identify and combat these attacks.
Like the Cold War, there is an ongoing arms race between the two sides. Both are constantly pushing the tech further, and neither can afford to fall behind. A cloud of uncertainty looms overhead, as many are left questioning if the benefits of advancement outweigh the drawbacks.
How It Fuels Cybercrime
AI can be used to enhance, or even automate, every phase of an online criminal plot. Before a scheme has progressed beyond the idea phase, AI can facilitate reconnaissance. Cybercriminals can utilize AI to gather intelligence on targets and pinpoint vulnerabilities in software and security systems.
Furthermore, AI can be used for social engineering and phishing. Learning models can generate convincing phishing messages and deepfake content to pose as high-profile individuals or entirely fabricated identities.
AI can even be used to aid in the development of malware. Scammers use chatbots to develop code for malicious programs, as well as to plan ways to distribute it. These programs can mimic innocent, legitimate software, allowing them to go undetected by antivirus systems.
How It Helps Combat Cybercrime
While artificial intelligence is being used to facilitate cybercrime, it’s also being employed in efforts to combat it. Cybersecurity professionals are adopting the tool’s capabilities to detect scams earlier, collect intel, and streamline mundane processes.
Just as AI can find weak points in security systems, it can also be employed to discover suspicious network behavior. This helps businesses assess large quantities of traffic data and spot anomalies that can indicate looming issues, such as ransomware attacks and data breaches, before it’s too late.
Before a threat even presents itself, AI can help companies detect vulnerabilities that could be exploited by cybercriminals. It can also automate software updates to ensure that endpoints are as secure as possible.
The AI Arms Race: Cybersecurity’s Fight to Keep Pace
A strong parallel that can be drawn between the Cold War and the current battle of AI is the need to stay ahead. Cybercriminals have consistently sought new ways to target their victims, and AI technology has only further streamlined this process.
On the flip side, AI also enables cybersecurity experts to better prepare for cyberattacks. AI learning models can detect scam-related content, especially pieces generated by the same technology. This leaves two sides looking to utilize and advance the same “weapon” to outpace their adversaries.
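At its simplest, detecting scam-related content is a text-classification problem. The toy sketch below trains a classifier on a handful of invented messages; real deployments train on large labeled corpora and far more capable models.

```python
# A toy text classifier for scam-like messages. The training data
# and labels are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice is attached, please confirm the wire transfer today",
    "Meeting moved to 3pm, see updated agenda",
    "Lunch on Thursday? The new place downtown looks good",
]
labels = [1, 1, 0, 0]  # 1 = scam-like, 0 = benign

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(messages, labels)

print(clf.predict(["Please confirm this urgent transfer immediately"]))
```

The same pattern, scaled up, is what lets defenders flag AI-generated phishing text at volume.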
In a press release, CompTIA CEO Todd Thibodeaux explained that cybersecurity experts need to embrace the capabilities of AI. He believes that rejecting the new technology will only lead businesses to fall behind.
“AI is not just a tool—it’s a transformative force reshaping the cybersecurity workforce,” Thibodeaux said. “As we look to the future, it’s clear that professionals must embrace AI to stay competitive. From training and certifications to job roles and skills, the industry must adapt to this new reality.”
This is nothing new, both in the world of cybersecurity and in the world at large. New technologies have consistently posed a threat to cybersecurity, prompting industry leaders to continually refine their approach.
The Importance of Surveillance and Counterintelligence
Like the Cold War, both sides of the AI-cybercrime battle place a great deal of emphasis on gathering intelligence on the other. Cybercriminals utilize AI to assess security systems before launching an attack.
Additionally, the technology enables scams to adapt to human behavior rapidly, allowing criminals to circumvent both systems and the employees who operate them.
To combat this, cybersecurity teams have also employed AI in many cases. AI programs can help pick up suspicious behavior and evaluate threat risks.
Furthermore, generative AI can evaluate large quantities of scam-related data to help predict the types of attacks a network may be vulnerable to.
The Perpetuation of Misinformation
There has been a lot of unverified information spread around about the potential uses of AI. Much like the second half of the 20th century, people today are left in fear of an uncertain outcome. To make matters worse, AI is often used to generate and spread misinformation.
While AI is definitely an asset to cybercriminals, it may not be the “doomsday device” many believe it to be. Its capabilities can expedite and enhance online scams, but experts believe it hasn’t unlocked anything new.
“There is so much hype around AI, in cybersecurity and elsewhere,” said Ruben Boonen, CNE Capability Development Lead with IBM X-Force Adversary Services. “But my take is that, currently, we should not be too worried about AI-powered malware. I have not seen any demonstrations where the use of AI is enabling something that was not possible without it.”
Real-World Cases: How AI is Used for Good and Evil
To get a better understanding of the current stance of AI on both sides of the cybersecurity battle, it’s important to look at real examples from the recent past. This not only gives us insight into how both sides utilize AI, but it also helps us predict the trajectory the technology may take.
Deepfakes Fuel Phishing Attacks Targeting Major Companies
Numerous cases of AI-driven phishing scams have emerged in recent years. Generative AI can create messages, voice memos, and even videos that are extremely convincing. As you will see in the following cases, even the smallest details can determine whether a target falls victim.
In early 2024, multinational engineering firm Arup was the target of a deepfake phishing scam. The company responsible for the Sydney Opera House had its CFO and other high-ranking executives digitally cloned to convince an employee to transfer $25 million.
The scammers set up a video conference with one of the firm’s finance workers at their Hong Kong location. The worker expressed doubts after receiving an email discussing the need for a “secret transaction”. However, those reservations were quelled when he joined the conference and saw what appeared to be his colleagues.
Just a few months later, Ferrari found themselves in the midst of a similar phishing attempt. An executive from the luxury sports car manufacturer received a flurry of WhatsApp calls and messages that supposedly came from CEO Benedetto Vigna. The account seemed legit, even using the correct profile picture for Vigna, and urged the executive to complete a confidential transaction.
The voice in the messages matched Vigna’s accent, but there were moments where the pitch and cadence of his speech seemed suspicious. The executive decided to ask a question about a topic he’d discussed with Vigna a few days earlier. When the scammers were unable to answer, the scam unraveled.
AI Helps Thwart an AI-Enabled Attack
On August 18, 2025, Microsoft Threat Intelligence detected a phishing scam that utilized a compromised business email account to harvest credentials. The scammers spoofed email headers and attached a malicious file that, when opened, redirected to a CAPTCHA verification before landing on a fake login page.
An AI-enhanced analysis of the file’s code revealed an unusual method for hiding its malicious intentions. Rather than cryptographic obfuscation, the code used a combination of business terminology and element transparency.
This allowed the scammers to hide the payload’s functionality in what appeared to be just long sequences of business data. In reality, sequences of these terms were mapped to specific instructions. As the script runs, it decodes to carry out actions such as redirecting a user’s browser, enabling fingerprinting, and initiating session tracking.
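The obfuscation scheme described above can be pictured with a toy decoder: innocuous-looking business terms map to hidden instructions. The term-to-action mapping below is invented for illustration; the actual payload was obfuscated script code, not Python.

```python
# Toy illustration of term-based obfuscation: words that read like
# business data are mapped to hidden instructions. All terms and
# actions here are invented.
ACTION_MAP = {
    "revenue": "redirect_browser",
    "forecast": "enable_fingerprinting",
    "quarterly": "start_session_tracking",
}

payload = "quarterly revenue forecast revenue"  # looks like business jargon

def decode(blob: str) -> list[str]:
    """Translate each recognized term into its hidden instruction."""
    return [ACTION_MAP[word] for word in blob.split() if word in ACTION_MAP]

print(decode(payload))
```

Because no cryptographic routines appear in the code, scanners keyed to conventional obfuscation patterns have nothing obvious to flag.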
Microsoft Security Copilot helped detect and block the phishing campaign before it could do damage. Furthermore, Copilot determined that the malicious file was likely created using a large language model, based on the complexity and verbiage of the code.
Why AI Opponents Can’t Keep Their Heads in the Sand
There are several reasons why people view AI as a threat rather than an asset. Some fear their jobs could be replaced by a machine; others hesitate because of visions of a sci-fi computer takeover. However, those who refuse to utilize the capabilities of AI are doomed to fall behind.
Experts suggest that AI should be implemented as a tool, not a replacement. It can be extremely beneficial for enhancing human work, especially on smaller teams. The key is to ensure that the working relationship between AI and human employees remains collaborative.
“AI is a tool that can be used to empower rather than replace security pros,” said Caleb Sima, Chair of CSA AI Security Alliance. “In fact, a survey that CSA recently conducted with Google found that the majority of organizations plan to use AI to strengthen their teams, whether that means enhancing their skills and knowledge base or improving detection times and productivity, rather than replacing staff altogether.”
About the Author:
Jack Gillespie is a cybersecurity content writer with experience in digital forensics and security analysis. His contributions to Digital Forensics have given him opportunities to collaborate with industry professionals and gain deeper insight into the evolving cybersecurity landscape. Dedicated to exploring emerging threats and digital defense strategies, Jack continues to deliver clear, engaging content for readers seeking to understand the complexities of the cyber world.