Attending RSA? Reserve your spot at Anzenna’s mixer on April 29—request your invite now.
Attending RSA? Reserve your spot at Anzenna’s mixer on April 29—request your invite now.

June 4, 2025

AI Security 101: What You Need to Know to Take Action

Chinmaya Sharma

Categories

Artificial Intelligence (AI) has introduced both new challenges and new opportunities to cyber security. On the one hand, cyber criminals leverage AI capabilities to create attacks that are more advanced and with wider scale than anything before. This is possible, since AI is a huge force multiplier for these hackers. We’re covering that aspect of cyber crime in a separate blog.

On the other hand, AI also provides a major force multiplier for cyber security defenders. Whenever defending networks, systems and data, AI enables cybersecurity vendors, and their customers, to present a good, solid security posture in the face of new AI threats. In other words, you must adopt and use AI to protect against AI risks. 

Security leaders and teams need great, actionable strategies to effectively implement AI security. These strategies must include aspects such as security operations, governance, compliance, and vulnerability management. In this blog, we’ll cover these aspects and suggest some thoughts on how to best address them.

Defining AI Security

‘AI security’ involves the strategies and tactics an organization must implement to protect both its AI systems and their data, from any and all cyber threats that are out there. AI security has two aspects:

  1. Security for AI: Focuses on protecting each component of the AI system – data, algorithms, and applications. Protection must be comprehensive, against all threats, including data breaches, unauthorized access, insider threat, etc. The AI systems must remain confidential, reliable, and with integrity; after all, AI plays an increasingly central role in organizations’ business operations.
  2. AI for Security: Utilizes AI technologies to improve cybersecurity protection. AI tools automate detection and response, mitigate human errors, and enable rapid threat response. For example, machine learning algorithms examine and highlight unusual patterns in huge datasets and enable them to more effectively identify potential cyber threats, compared to conventional techniques.

 

Both aspects must be incorporated for an organization to achieve robust AI security. When implemented correctly, Security Operations (SecOps) and Development Operations (DevOps) teams can effectively counter cybersecurity threats and gain operational efficiency.

Understanding AI and Machine Learning

IDC predicted, in December 2023, that 85% of CIOs would change how their organizations work by 2028. They will do so to better leverage technologies like AI, machine learning, and more.

The difference between AI and machine learning is:

  • Artificial Intelligence: Machines perform tasks that require human intelligence. Such tasks include learning, drawing conclusions, and solving problems.
  • Machine Learning: Machines perform a specific task and provide accurate results by learning from data, identifying patterns, and doing all that without explicit programming. Machine Learning is a subset of AI.

The rapid rise of these two concepts, especially since 2023, has put a new and significant burden on cybersecurity professionals. They now must develop new expertise in AI security, something they didn’t need before. Today’s cybersecurity professionals must understand AI and machine learning, to make sure their organizations use these securely in-house, and to protect themselves from external threats leveraging these technologies.

One new challenge, for example, is how to identify and overcome vulnerabilities in AI systems. Take for instance a machine learning model that was trained on biased datasets, either intentionally or unintentionally. Then, the ML model uses the same biases to make decisions. Such biased results can have both business and ethical consequences. On the business side, the ML model may make decisions that will harm the organization’s business. On the ethical side, biased decisions lead to problematic actions in fields like law enforcement or healthcare.

Cybersecurity professionals must understand AI and ML inside and out to prevent the above. This will help them evaluate risks, prevent biases, handle AI-related security incidents, and make sure that AI and ML help their organization rather than harm it.

Addressing Workforce Shortages in Cybersecurity

The global market has a severe shortage of skilled cybersecurity professionals. We’ve known it for years, it’s nothing new. ISC2 reports on that shortage every year; they’re a non-profit organization which specializes in training and certifications for cybersecurity professionals, so they’re experts on the subject. 

Back in 2022, ISC2 estimated a global shortage of 3.4 million cybersecurity professionals. The American National Institute of Standards and Technology brings many other studies from recent years about the huge shortage. Among these, NIST quotes another study, about a shortage of well over half a million cybersecurity professionals in the U.S. alone.

AI did not help to close that huge skill shortage. In fact, the exact opposite happened. Just two years later, in October 2024, ISC2 said in its annual report that the global cybersecurity workforce gap grew to nearly 4.8 million jobs!  While North America is doing relatively well, and is “only” missing a little over half a million skilled cybersecurity employees, the situation is worse in Asia-Pacific, with a shortage gap of more than 3.3 million cybersecurity professionals.

This huge workforce gap is not something trivial. It affects how organizations can defend themselves. It also affects how organizations implement new technologies, such as AI, which in many organizations is mandated by the board of directors. It would not be farfetched to say that many organizations are implementing AI in a subpar way, from a security perspective.

Many organizations face a tough dilemma. On the one hand, AI provides a huge promise to their business by automating repetitive tasks. By doing so, employees can focus on things that require human attention. On the other hand, many organizations don’t have enough cybersecurity personnel to implement and use AI in a secure manner. When they do implement AI, they expose themselves to significant risks.

Cybersecurity teams themselves face a similar challenge. On the one hand, AI-based defenses enable them to do their jobs faster and more effectively. On the other hand, new AI security tools require appropriate integration, training for the security team, and possibly also adapting some processes. Implementing all of that is challenging when you’re already short-staffed.

AI also brings with it new challenges. One of them, for example, is managing risks involving non-human identities. As machine-to-machine communication rapidly increases, it’s becoming paramount to safeguard these “identities”. This is one of many places where current regulations lag behind the astronomical advancements in technology. Without a clear framework, there are numerous risks related to these non-human identities – them being hijacked, impersonated, manipulated, etc. These risks allow attackers to bypass traditional security systems unnoticed. Gartner says that by next year, i.e. 2026, about 80% of organizations will struggle to manage non-human identities, and that will create huge risks involving breaches and compliance failures.

Why is AI security important?

In a survey held in early 2024 by software company Splunk, they surveyed 1,650 cybersecurity executives across the U.S., Japan, the U.K., France, Germany, and several additional countries. In the survey, 93% of executives said their companies had already deployed generative AI for business purposes, and 91% said they’d deployed AI within their security teams. However, 34% said they lacked a complete generative AI policy.

Moreover, when asked about their top security initiatives for 2024, AI came in first with 44% of executives choosing it as their top security initiative. Cloud security came second, with 35% of executives naming it as their top priority. 

When executives were asked whether AI would tip the scales in favor of defenders or adversaries, respondents were almost evenly divided – 45% predicted adversaries will benefit most, while 43% thought defenders will come out on top. This shows how even top cybersecurity leaders are divided between viewing AI as a threat or a benefit.

The conclusion: cyber threats stemming from AI security are real and are already here. These new risks put extra pressure on SecOps and DevOps teams. Organizations must proactively manage their environments, to take advantage of the opportunities that AI security presents. Else, cyber criminals will use these technologies to harm you in ways you’ve never experienced before.

I see four areas that must be prioritized for effective implementation of AI security. Of course, that’s only if you want to achieve optimal management of cyber risks (else ignore my suggestions):

  1. Sensitive and Regulated Data Protection
  2. AI Risk Mitigation
  3. Ethical and Regulatory Compliance
  4. Security Efficacy

Sensitive and Regulated Data Protection

AI systems love data. Lots of data. That’s why your AI systems attract cyber criminals, honey attracts bees. When hackers breach your AI system, it’s not just the system that’s at risk; it also breaks your customers’ trust in your company and causes serious damage to your brand. The numbers show that clearly. In 2024, the average cost of a data breach in the U.S. was nearly $9.4 million, according to IBM’s “Cost of a Data Breach Report 2024”. That number is not just a line item; it’s a business-critical event. To guard against this, you must have ultra-strong data protection. That’s non-negotiable. Encryption, Role-Based Access Control (RBAC), and rigorous security governance must be part of your foundational security.

AI Risk Mitigation

AI isn’t just another IT system – it introduces entirely new threat vectors. In my opinion, one of the most meaningful threats is model theft. It means hackers stealing your proprietary algorithms. Another new and meaningful threat is adversarial attacks, where attackers manipulate your training data, with the goal of derailing how your AI behaves, and the results it provides. These are brand new, sophisticated and high-impact threats – that your static defenses won’t be able to handle. To stay ahead of cyber criminals who have their eyes on you, your security strategies must evolve in tandem with AI innovation.

Ethical and Regulatory Compliance

AI systems often process sensitive data, like personal or regulated data. In other words, I’m referring to Personal Identifiable Information (PII), and additional types of sensitive data you don’t want leaked. GDPR, CCPA and other regulatory frameworks are not just guidelines; they’re legal guardrails and must be taken with outmost seriousness. It’s great if your compliance is such that you avoid penalties; however, true strong compliance is much more than that – it’s about maintaining transparency on how your AI decisions are made, so that you can avoid risks like bias. Else, biased models can cause real harm in areas like hiring, healthcare, and law enforcement. 

Security Efficacy

As I mentioned earlier, AI is both a risk factor but also a powerful partner. Machine learning systems detect anomalies in real-time and help security teams to identify and respond to threats in a faster way. Since today’s threat landscape evolves faster than ever, with new attack techniques constantly developing, you can use any partner you can get to fight against bad actors. The speed and precision offered by AI is an absolute game-changer, in that respect.

AI Security Risks

AI brings incredible potential, but it also opens the door to new risks that traditional security tools weren’t built to handle.

Data Poisoning and Adversarial Examples

Attackers can corrupt training datasets (data poisoning) or craft malicious inputs (adversarial examples) to throw off AI results. The consequences? Misguided decisions in highly-regulated sectors like healthcare, finance or public safety—and that’s a risk no one can afford.

Model Theft

When threat actors get access to your AI models, they’re not just stealing code—they’re stealing intellectual property. Worse, stolen models can be used to power everything from deepfakes to targeted cyberattacks.

Prompt Injection Attacks

Generative AI systems are especially vulnerable to prompt injection attacks. These manipulate model inputs to produce misleading or dangerous outputs. The more organizations rely on GenAI, the more they’ll need to secure it.

AI Supply Chain Risks

AI systems don’t operate in a vacuum. They depend on APIs, third-party models, open-source components—all potential attack vectors. Without strict supply chain controls, organizations risk importing vulnerabilities from third-party components and services straight into their systems.

Mitigating Risk Using AI Security Frameworks

Managing AI security risks starts with robust data governance. That means classifying, securing, and monitoring data throughout its lifecycle. As Gen-AI tools become more mainstream, governance gaps – like oversharing sensitive data – can become ticking time bombs.

RBAC is crucial here. Access to AI systems and datasets must be limited to those who truly need it. You should bring together teams responsible for identity, data security, compliance, and digital workplace tools – them working together will help close governance gaps and create a more unified front.

The sheer scale of the challenge is huge, and it keeps growing at a fast pace. Large enterprises face billions of cyber events each day. With some teams receiving 10,000 alerts daily, it’s clear we can’t rely on humans alone. That’s where smarter AI systems come in—to handle volume, detect true threats, and cut through the noise.

AI Security Strategies That Work

Want to make a real dent in security threats? AI and automation can resolve up to 85% of cyber alerts, according to IBM. Beyond efficiency, AI and automation also help compensate for the cybersecurity talent gap. 

Let’s break down how to secure AI across three key areas:

Data Security

  • Encrypt sensitive data, both at rest and when in transit. 
  • Enforce RBAC so only the right people have access to critical information. 
  • Continuously monitor for anomalies that may be possible threats.

Model Security

  • Validate sources to ensure model inputs and updates are trusted. 
  • Secure APIs and plugins to prevent exploitation. 
  • Harden models to protect them against manipulation or performance degradation.

Usage Security

  • Implement ethical guardrails to prevent any misuse of AI-generated content. 
  • Monitor in real-time to detect prompt injections, data leakage, or unexpected behaviors.
  • Use anomaly detection to flag anything that seems “off” within AI environments.

Emerging AI Security Tools to Know

As both the threat landscape and AI technologies continuously evolve, so do the tools we need to fight back:

  • Machine Learning Detection and Response (MLDR): These tools monitor AI systems at every stage of development, and flag security risks they identify. 
  • Security Orchestration, Automation, and Response (SOAR): These platforms automate threat detection and response and enable cybersecurity teams to handle incidents in a much faster and more efficient way.

No surprise, then, that a Salesforce survey, held in 2024 among hundreds of leaders in large Australian enterprises, found that 43% of executives saw increasing productivity as a main reason to adopt AI in security. It’s not about replacing humans – it’s about empowering them to do more, better.

Four Best Practices for AI Security

To stay ahead, organizations should anchor their AI security around four proven principles: 

  1. Enforce Governance Frameworks 

Work with compliance and GRC teams to ensure AI systems align with ethical standards, minimize bias, and meet legal requirements like GDPR. 

  1. Adopt the CIA Triad 

Keep Confidentiality, Integrity, and Availability front and center in all security decisions – it’s foundational for user trust and operational stability. 

  1. Secure the AI Lifecycle 

From training to deployment, embed security into every step. DevOps and SecOps teams should collaborate, protect the Continuous Integration / Continuous Delivery (CI/CD) pipelines, and enable continuous monitoring. 

  1. Promote Explainability and Trust 

Transparent, explainable AI models build trust, streamline debugging, and make it easier to prove compliance. In short, clarity leads to credibility.

The Road Ahead: Balancing Innovation and Security

In this blog, we covered two aspects of AI security. The first was how to protect AI systems from vulnerabilities. The second was how to use AI to improve your organization’s cybersecurity posture.

But AI security isn’t just about defense. It’s also about enabling safe, scalable innovation for your organization. Yes, we need to secure data, models, and systems – but we also need frameworks that evolve alongside technology. 

The goal isn’t to slow down AI adoption – heck no! The main message I wanted to convey in this blog is – let’s do it the right way.

By building security into the core of your AI strategies, your business can unlock massive potential – boosting productivity, streamlining decisions, and protecting what matters most in the process.

FAQ

What is AI security?

AI security includes two parts. The first is protecting AI systems, including models, applications and data, from online attacks. The second part is using AI to strengthen the organization’s overall cybersecurity defenses.

How does AI improve cybersecurity?

AI improves cybersecurity by automating detection and response systems. This automation lowers the possibility of human error and enables organizations to react much quicker to possible threats.

What are the main risks in AI security?

The main risks in AI security include data poisoning, adversarial attacks, model theft, and weaknesses in the AI supply chain. An organization that implements AI systems must address all these issues and implement proper safeguards.

How has Generative AI affected security?

AI has fundamentally changed the game with everything regarding security. The risks have increased significantly, and additional risks have been added such as data misuse, misleading outputs, and theft of intellectual property. All of these are creating the need for organizations to have stronger ethical and technical guardrails.

What’s the best way for us to prepare?

The best way to prepare is to take action early. Start with proactively implementing AI-specific security protocols, cross-functional governance, and continuous training for all employees. The sooner you act, the safer your organization will be.

Other Related Blogs

What is Generative AI in Cybersecurity

Nima

May 16, 2025

What is Cyber Security? The Different Types of Cybersecurity

Albert

May 7, 2025

What is Insider Risk and How to Manage It

Chinmaya Sharma

May 7, 2025