Skip to content

AI Security: Insights from the Security Journey Content Team

Published on

Artificial Intelligence (AI) is rapidly evolving, and with its widespread adoption, security concerns are at the forefront. The Security Journey content team recently embarked on an initiative to enhance their AI security training based on the latest OWASP guidance for AI Large Language Model (LLM) threats. In this blog, we’ll share key insights, challenges, and lessons learned from developing hands-on AI security content.

The Evolution of AI Security

The updated OWASP AI security guidance has taken a more technical approach compared to previous iterations. Initially, AI security discussions were broad and generalized. However, as AI adoption has skyrocketed, the vulnerabilities have become more nuanced, encompassing areas such as retrieval-augmented generation (RAG) weaknesses, vector embedding vulnerabilities, and the security of AI agents.

Security Journey’s content engineers, Noah and Tyler, spent significant time developing sandbox lessons that provide hands-on experience with these vulnerabilities. Noah observed, "It's really not just focused on the AI, but the whole system that AI actually integrates into. So we had to try to tackle that and teach that."

The Shift in OWASP’s AI Security List

One of the biggest observations from our content engineers was the shift in the OWASP AI security list. The first iteration cast a wide net, covering general risks. The new version consolidates and refines key security concerns, making them more actionable and technical.

Notable Changes:

  • Increased Technical Depth: The new guidance digs into how AI components interact with systems, highlighting the importance of securing not just the model but also the surrounding infrastructure.
  • A Greater Emphasis on AI Agents: The inclusion of security concerns related to AI agents and their interactions was a major highlight for our team.
  • The Removal of Model Theft as a Standalone Threat: While OWASP no longer lists model theft as a distinct category, it has been incorporated under broader risks. Tyler noted, "It makes sense, especially because we're dealing with integration from existing models rather than people training their own. But I do think model theft still deserves attention."

The Challenges of Teaching AI Security Hands-On

Developing hands-on AI security lessons presented a set of unique challenges, primarily due to the non-deterministic nature of AI models. Unlike traditional security vulnerabilities that produce consistent, predictable failures, AI models can behave inconsistently.

For example, when developing a model denial-of-service attack scenario, Noah faced significant hurdles: "We don't want users hitting the API 10,000 times or submitting massive amounts of text. So I had to design a way for them to still experience the attack while keeping our infrastructure safe."

Another challenge involved testing AI-generated responses. Since AI models may respond differently to the same input, writing test cases became significantly more complex. Tyler explained, "Sometimes the AI just decides to not cooperate. That makes it really hard to ensure lessons remain stable over time."

Key Takeaways for Developers Working with AI

As AI becomes increasingly integrated into software development, understanding its security implications is crucial. Our team identified several key takeaways:

  1. Don’t Skip the Fundamentals – AI doesn’t replace core security practices. Developers still need to apply fundamental security principles like input validation, least privilege access, and threat modeling.
  2. Understand What You’re Using – Many developers are leveraging AI-assisted coding tools, but without a solid security foundation, they may unknowingly introduce vulnerabilities.
  3. Validate AI Output – AI models can be confidently wrong. Always validate AI-generated responses, especially in critical applications.
  4. Prepare for Change – AI security is an evolving landscape. Models and vulnerabilities will continue to change, requiring ongoing education and adaptation.
  5. Control Your Data – Consider hosting models locally if privacy and security are top concerns. Many AI providers update terms frequently, which could impact how they handle your data.

Tyler summed it up well: "Do your research. AI makes things easy, but it's dangerous if you don’t know what’s happening under the hood."

Final Thoughts

AI security is a journey, not a destination. As the landscape continues to shift, staying ahead of threats requires continuous learning, research, and hands-on practice. At Security Journey, we remain committed to providing developers with engaging and practical AI security training to help them navigate this rapidly changing field.

If you’re a Security Journey customer, we encourage you to check out our latest AI security content. If not, now is the time to get on board and explore the cutting-edge lessons crafted by our expert team!