Michael Erquitt is a Senior Security Engineer at Security Journey who develops educational content for all of our learners.
Michael joined the podcast to discuss the AI threat landscape. The discussion starts with the history of the threat landscape, moves on to the biggest AI security changes of 2025, and closes with a look ahead at the future of AI and AI security.
Artificial Intelligence (AI) has completely transformed how we work, innovate, and solve problems. It's opened up a world of possibilities, but as we move deeper into 2025, it's clear that the AI threat landscape is evolving just as quickly as the technology itself. Let’s dive into some of the challenges we’re facing and how we can tackle them head-on.
AI is a game-changer, but it’s not without its quirks. One of the more interesting challenges is prompt injection attacks. These attacks exploit how AI models process input, allowing attackers to manipulate systems in unexpected ways. Often they’re more of a nuisance than a critical vulnerability, but they highlight the importance of securing AI systems. Implementing strong input validation and secure coding practices can help prevent these types of issues and ensure your AI models perform as intended.
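To make that concrete, here’s a minimal sketch of what input validation in front of an AI model might look like. The function names and the injection patterns are illustrative assumptions, not anything from the episode; real systems need far more robust detection, and pattern-matching alone won’t stop a determined attacker.

```python
import re

# Hypothetical patterns that often signal prompt-injection attempts.
# A real deployment would use a much richer detection layer.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"system prompt",
    r"you are now",
]

def validate_user_input(text: str, max_len: int = 2000) -> str:
    """Basic checks on untrusted text before it reaches a model."""
    if len(text) > max_len:
        raise ValueError("input too long")
    lowered = text.lower()
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("possible prompt injection detected")
    return text

def build_prompt(user_text: str) -> str:
    # Clearly delimit untrusted input so downstream logic can
    # treat it as data rather than instructions.
    safe = validate_user_input(user_text)
    return (
        "Summarize the following user message:\n"
        f"<user_input>\n{safe}\n</user_input>"
    )
```

The delimiting step matters as much as the filtering: keeping untrusted input clearly separated from your own instructions is a basic secure-coding habit that carries over directly from traditional injection defenses.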
Another issue we’ve got to talk about is overreliance on AI systems. We’ve all seen it: organizations jump on the AI bandwagon, leaning on these tools to automate everything and assuming they’re foolproof. But that’s dangerous. Whether it’s generating code, making business decisions, or automating operations, blind trust in AI can lead to trouble. We need to keep human oversight in the loop and stick to foundational security practices.
Let’s talk about AI agents for a second. These are specialized tools designed for specific tasks that work alongside general-purpose AI models. They’re incredible for productivity, but they also create new security challenges. For example, APIs used to connect these agents can become vulnerabilities if they’re not properly secured. And as these agents get smarter and more interconnected, the attack surface only grows.
There’s also this concept of context enrichment, where models are fed additional data to refine their outputs. It’s a powerful technique, but it comes with risks. If the data isn’t handled securely, it’s easy to imagine how sensitive information could be exposed.
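A simple mitigation is to redact sensitive values from retrieved context before it ever reaches the model. This is only a sketch under stated assumptions: the two patterns below (emails and US SSNs) are illustrative, and production redaction usually relies on a dedicated PII-detection service rather than a couple of regexes.

```python
import re

# Illustrative redaction patterns; real PII detection is much broader.
REDACTIONS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_context(text: str) -> str:
    """Strip recognizable sensitive values from enrichment data."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text
```

Running the scrub on the enrichment pipeline, rather than trusting the model to keep secrets, keeps the sensitive data out of prompts, logs, and any downstream caches in one pass.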
Here’s the thing: no matter how much AI shakes things up, the fundamentals of application security still matter. A lot. Weak authentication, insecure APIs, poor input validation—these are issues we’ve been dealing with forever, and they’re not going away just because AI is in the mix. If anything, doubling down on the basics is one of the best ways to defend against AI-related risks.
The future isn’t without its challenges. Here are a few big ones to keep on your radar:
Let’s face it: AI isn’t going anywhere. It’s here to stay, and it’s going to keep changing how we work and live. But that doesn’t mean we should just sit back and let it take the wheel. We need to be intentional about how we use it: