Artificial Intelligence (AI) is rapidly evolving, and with its widespread adoption, security concerns are at the forefront. The Security Journey content team recently embarked on an initiative to enhance their AI security training based on the latest OWASP guidance for AI Large Language Model (LLM) threats. In this blog, we’ll share key insights, challenges, and lessons learned from developing hands-on AI security content.
The updated OWASP AI security guidance has taken a more technical approach compared to previous iterations. Initially, AI security discussions were broad and generalized. However, as AI adoption has skyrocketed, the vulnerabilities have become more nuanced, encompassing areas such as retrieval-augmented generation (RAG) weaknesses, vector embedding vulnerabilities, and the security of AI agents.
Security Journey’s content engineers, Noah and Tyler, spent significant time developing sandbox lessons that provide hands-on experience with these vulnerabilities. Noah observed, "It's really not just focused on the AI, but the whole system that AI actually integrates into. So we had to try to tackle that and teach that."
One of the biggest observations from our content engineers was the shift in the OWASP AI security list. The first iteration cast a wide net, covering general risks. The new version consolidates and refines key security concerns, making them more actionable and technical.
Notable Changes:
Developing hands-on AI security lessons presented a set of unique challenges, primarily due to the non-deterministic nature of AI models. Unlike traditional security vulnerabilities that produce consistent, predictable failures, AI models can behave inconsistently.
For example, when developing a model denial-of-service attack scenario, Noah faced significant hurdles: "We don't want users hitting the API 10,000 times or submitting massive amounts of text. So I had to design a way for them to still experience the attack while keeping our infrastructure safe."
Another challenge involved testing AI-generated responses. Since AI models may respond differently to the same input, writing test cases became significantly more complex. Tyler explained, "Sometimes the AI just decides to not cooperate. That makes it really hard to ensure lessons remain stable over time."
As AI becomes increasingly integrated into software development, understanding its security implications is crucial. Our team identified several key takeaways:
Tyler summed it up well: "Do your research. AI makes things easy, but it's dangerous if you don’t know what’s happening under the hood."
AI security is a journey, not a destination. As the landscape continues to shift, staying ahead of threats requires continuous learning, research, and hands-on practice. At Security Journey, we remain committed to providing developers with engaging and practical AI security training to help them navigate this rapidly changing field.
If you’re a Security Journey customer, we encourage you to check out our latest AI security content. If not, now is the time to get on board and explore the cutting-edge lessons crafted by our expert team!